Python script to download from Fakku!
0
Nice script, sure beats my previous method of just downloading all the images with firefox addons and adding them to a zip
0
I created a link grabber for fakku's main page.
It first looks for a file called oldlist in the same directory and reads the existing links.
Then it goes to fakku.net and gets the links that exist there.
Finally, it deletes the old list file if needed, creates a file called list, and adds every link from fakku that is not in oldlist to both list and oldlist.
You may need to delete some of the first couple of lines in oldlist once it grows big, but if you delete the whole file it will append every link on the fakku home page to list.
http://pastebin.com/raw.php?i=JgDqQmnU
EDIT:
I also added a -e argument which only appends english content,
a url argument so you can get links from a series,
and a -p argument to check multiple pages.
eg.
./fakku_link_gabber
get the links to all unread content from the main page
./fakku_link_gabber -e
same as the last one, but only english content
./fakku_link_gabber https://www.fakku.net/series/neon-genesis-evangelion -e -p27
get the links to the english content of the first 27 pages of NGE
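For anyone curious how the oldlist/list bookkeeping described above can work, here is a minimal Python sketch of just the dedup step; `merge_links` and its file paths are hypothetical stand-ins, and the actual fakku.net scraping is left out.

```python
import os

def merge_links(fetched, oldlist_path="oldlist", list_path="list"):
    """Append links not already in oldlist to both list and oldlist."""
    seen = set()
    if os.path.exists(oldlist_path):
        with open(oldlist_path) as f:
            seen = {line.strip() for line in f if line.strip()}
    new = [url for url in fetched if url not in seen]
    with open(list_path, "w") as out:      # "list" is recreated on every run
        for url in new:
            out.write(url + "\n")
    with open(oldlist_path, "a") as old:   # "oldlist" only ever grows
        for url in new:
            old.write(url + "\n")
    return new
```

This also shows why deleting all of oldlist makes the next run dump the whole front page into list: with no history, every fetched link counts as new.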
0
I also made a link grabber, but I made it in Visual Basic. I'm a bit iffy about posting executable files on here, so I'm just going to edit this post later, after I pull all the links from every page in the english and manga sections. My program filters out the links on each page, and once you start from a page it will continue until you stop it.
It's slow, so no lag, and there are no link issues; I also fixed the program so it doesn't freeze. The only downside is that it starts from page 2 and cannot start from the front page or the first page of a section, lol. I mainly made this because I wanted to pull old content and save it for later.
Tested on Windows 7. I'll post pastebins of links soon :) Eventually the program too, once I get permission from a mod.
0
Awesome script, especially the use of a list; very efficient.
I do however have a few minor issues with it:
1: It doesn't seem to download gifs (I know, who the fuck uses gifs in this day and age), but these releases do use them:
https://www.fakku.net/doujinshi/teme-benkyou-oshiero-yo-english
https://www.fakku.net/doujinshi/rin-ni-muchuu-english
2: If it encounters something it can't process in a list, it halts; that may be a bit more complex to solve. I don't know python, so I wouldn't know.
3: If the title or file name gets too long, it seems to give an error.
Other than that it's great. Even if fakku were to re-add download links, this would still be nicer.
0
https://www.fakku.net/doujinshi/teitoku-donbgbt-touch-me-english
Doesn't seem to work:
H:\#Root\#New folder\yea\!temp>python fakku.py -l fakku_list.txt
Save: https://www.fakku.net/doujinshi/teitoku-donbgbt-touch-me-english
Traceback (most recent call last):
File "fakku.py", line 279, in <module>
main(*sys.argv[1:])
File "fakku.py", line 268, in main
dl(line, zip_type, None, args.attempts)
File "fakku.py", line 124, in dl
print("Here: " + dir)
File "C:\Python33\lib\encodings\cp437.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_map)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u2019' in position 33: character maps to <undefined>
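For what it's worth, that traceback is a console-encoding problem rather than a download problem: Python is trying to print a right single quote ('\u2019') to a Windows console using cp437, which has no such character. A hedged sketch of one workaround, replacing unprintable characters before printing (`console_safe` is a hypothetical helper, not part of the script):

```python
def console_safe(text, encoding="cp437"):
    """Round-trip text through the console encoding, replacing any
    character it cannot represent with '?'."""
    return text.encode(encoding, errors="replace").decode(encoding)

# print("Here: " + console_safe(title)) would then survive titles
# containing characters like '\u2019' or '\u2665'.
```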
0
Andy29485 wrote...
I crated a link grabber for fakku's main page.It first looks for a file called oldlist in the same directory and reads the existing links
....
http://pastebin.com/raw.php?i=JgDqQmnU
...
If for some reason this doesn't work, install beautifulsoup4 and urllib3 (you can download them from the "Python Package Index") and you are ready to go.
And I have a question: is there a way to add an option to filter some tags?
Something like:
./fakku_link_gabber https://www.fakku.net/tags/color -e -p10 -Tfuta
to get the links to the english content of the first 10 pages of color releases without the futa/netorare/yaoi/trap/anythingyoudontlike tag.
Anyway thank you all for your great work!
[size=7]Sorry[/size] [size=6]for[/size] [size=5]my[/size] [size=4]english[/size]
0
It doesn't work for this for some reason: https://www.fakku.net/manga/lovehole-english
It throws this:
C:\Users\Tiff\Pictures\Manga\Fakku Downloader>fakku-downloader.py
Save: https://www.fakku.net/manga/lovehole-english
Traceback (most recent call last):
File "fakku.py", line 279, in <module>
main(*sys.argv[1:])
File "fakku.py", line 275, in main
dl(args.url, zip_type, args.name, args.attempts)
File "fakku.py", line 124, in dl
print("Here: " + dir)
File "C:\Python32\lib\encodings\cp437.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_map)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u2665' in position 10: character maps to <undefined>
These are what I'm using:
fakku-downloader.py - http://pastebin.com/g7eZHh6Y
fakku.py - http://pastebin.com/rmuiBnyZ
0
camino-sin-retorno wrote...
Andy29485 wrote...
I crated a link grabber for fakku's main page.It first looks for a file called oldlist in the same directory and reads the existing links
....
http://pastebin.com/raw.php?i=JgDqQmnU
...
If for some reason this doesn't work, install beautifulsoup4 and urllib3 (you can download them from the "Python Package Index") and you are ready to go.
And I have a question, is there a way to add an option to filter some tags?
Something like:
./fakku_link_gabber https://www.fakku.net/tags/color -e -p10 -Tfuta
to get the links to the english content of the first 10 pages of color releases without the futa/netorare/yaoi/trap/anythingyoudontlike tag.
Anyway thank you all for your great work!
[size=7]Sorry[/size] [size=6]for[/size] [size=5]my[/size] [size=4]english[/size]
Sorry for taking so long to reply. I made a few changes to the grabber and downloader scripts.
1) I removed the need for urllib3 and beautifulsoup4, so you no longer need to install them for the script(s) to work.
2) For the url, you need to add a -u or --url in front of it for the option to work.
e.g. ./fakku_link_gabber -u https://www.fakku.net/tags/color
3) Added a -n/--no option for the tags that you do not want; they must be lowercase, separated by spaces.
e.g. ./fakku_link_gabber -n futa netorare yaoi trap
BUT if you do
./fakku_link_gabber -n futa netorare -e -n yaoi trap
then it will only exclude the second two tags (yaoi trap).
4) At some point (a while ago) when fakku.net was down, I added compatibility for pururin.com. If you do not want links to be grabbed from pururin, just treat it as another tag:
e.g. ./fakku_link_gabber -n pururin (other tag(s) you don't want)
5) There is a -t/--timeout option; if the webpage takes a while to load for some reason, this will help.
6) There is a -v/--verbose option; it will print in depth what the grabber is doing.
7) There seem to have been some problems with downloading, such as gifs and Unicode characters in titles; those should be solved in the downloader now, but I will not promise anything. Also, the downloader will download pururin urls and save them the same way it saves the fakku ones.
8) And lastly, there is a -g option. This creates a gui so that you can sort which doujinshi/manga you want to grab, which you want to open in the web browser, and which you don't want at all. You will need to install qt4 for your computer (not for python) and PySide for python. I would give more specific instructions, but I don't use windows all that often. I would have used tkinter (the one used in the downloader), but it only accepts gif images and I wanted to display the covers. The gui is optional, so you don't need to install the things listed above for the script to work (without the gui).
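A side note on point 3: the "only the second group counts" behavior is exactly what Python's argparse does when an `nargs='+'` option appears twice on the command line; the later occurrence overwrites the earlier one. A sketch of that behavior (not the script's actual code; the flag names are taken from the post above):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-e", "--english", action="store_true")
parser.add_argument("-n", "--no", nargs="+", default=[])

# A repeated -n overwrites the first group instead of extending it,
# mirroring "./fakku_link_gabber -n futa netorare -e -n yaoi trap".
args = parser.parse_args(["-n", "futa", "netorare", "-e", "-n", "yaoi", "trap"])
```

Using `action="append"` instead would collect both groups, at the cost of `args.no` becoming a list of lists.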
Wow that was a lot, anyway here are the files:
grabber
downloader
EDIT:
Sorry there were some mistakes in the scripts, should be fixed now.
EDIT 2:
Spelling errors.
0
Andy29485 wrote...
camino-sin-retorno wrote...
Andy29485 wrote...
I crated a link grabber for fakku's main page.It first looks for a file called oldlist in the same directory and reads the existing links
....
http://pastebin.com/raw.php?i=JgDqQmnU
...
If for some reason this don't work, install...
Sorry for taking so long to reply. I made a few changes to the grabber and downloader scripts.
...
Thank you!
0
waterflame
FAKKUDL.NET
If I remember right, there was a format change at some point, so unless it was updated it will fail.
0
I changed 2 of the regexp patterns and the script seems to work again.
More specifically, I changed the re_pages and re_series patterns.
final file: http://pastebin.com/xv4FJ3JE
Edit: Restored the title format functionality. I forgot I had randomly changed it because I wasn't sure how it worked back then.
Final File: http://pastebin.com/ia5B9fdc
http://pastebin.com/v0quvgLp
1
I am not sure how many changes I made since the last time I updated, but there were at least these three (in order of importance):
- Will now work with the updated fakku site.
- Which page is downloading will print on one line (I don't know how else to explain it, but it looks cooler now; try it and see).
- Added a -c option to use the ".cbz" extension instead of ".zip". Why? It changed the icon of the file to the first image in the archive (the cover) on my computer (I cannot guarantee that this will work on yours), so if you want this to happen, use -zcd (yes, it is possible to combine the "-z -c -d" flags like that) instead of -zd.
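The .cbz trick works because a .cbz is just a renamed zip archive, which is why changing only the extension is enough for comic readers (and, apparently, some file managers) to treat it as a comic book. A minimal standard-library sketch of writing pages into one (`write_cbz` and the page data are hypothetical, not the script's actual code):

```python
import zipfile

def write_cbz(path, pages):
    """pages: iterable of (filename, bytes), e.g. ("001.jpg", data)."""
    with zipfile.ZipFile(path, "w", zipfile.ZIP_STORED) as z:
        for name, data in pages:   # ZIP_STORED: don't recompress images
            z.writestr(name, data)
```

ZIP_STORED is used because jpeg/png pages are already compressed; deflating them again mostly wastes time.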
Links
grabber
downloader
If the scripts don't work, include the error output and I'll try to fix them.
Miscellaneous
I am looking into Calibre, a program for organizing e-books. Maybe I'll try to get the downloader to add tags and such for organization, but I'll most likely drop the idea.
EDIT:
Well, I actually really liked calibre, and added (very shaky) support for adding books to calibre after they download. Another feature that was added is sorting using hard links (for more info, read the comment at the top of the linked file). Anyway, I created a new paste on pastebin here. Only click if you have experience with python and are willing to test/debug this script. Yeah, I also (kinda sorta) added a bunch of comments and TODOs to make it a bit easier for other people to read my messy code. PM me or post a reply in this topic if you have comments/suggestions/patches for the script; I can't promise how fast I'll see them though.
0
We need a .exe of this, like an actual program that rips whole chapters when you just paste the url of the page with all the thumbnails of the doujin you want.
0
waterflame
FAKKUDL.NET
You could try http://www.pyinstaller.org/ if the python version you use is between 2.4 and 2.7; apparently it makes .exe's out of your python scripts.
I could update my old program :P / chrome extension if worse comes to worst.