Discussion:
[john-users] How to use multiple lists at the same time?
Eric Oyen
2018-09-01 16:19:07 UTC
Permalink
Hello fellow users.

Ok, my situation is this: I use a 30+ GB file, which can be a real PITA to edit. I have used the split command (here on OS X) to split it down into more manageable sizes (about 10 MB each). Sure, this will create a lot of files. What I want JtR to do is load each list in turn during the same session to process against the hashes I have.

Now, I found an old entry in the archives that dates back to 2012 and tried it, but it failed with an error code. So, is there any way to do this without having to worry about a massively huge file that's hard to move around? (It took me 2 hours to move it off my primary HDD to an external USB drive.)

Thanks,

-Eric
Solar Designer
2018-09-01 16:36:12 UTC
Permalink
Hi Eric,
Post by Eric Oyen
Ok, my situation is this: I use a 30+ GB file, which can be a real PITA to edit. I have used the split command (here on OS X) to split it down into more manageable sizes (about 10 MB each). Sure, this will create a lot of files. What I want JtR to do is load each list in turn during the same session to process against the hashes I have.
JtR doesn't support use of multiple wordlist files in one session.
We might or might not add this feature later.

Meanwhile, you could work around this with commands like:

cat *.txt | ./john --stdin ...

or using "--pipe" in place of "--stdin".

However, I don't recommend this. What I recommend is that you simply
avoid generating wordlists this large. Especially not if you also
intend to edit them (why would you?!) Use a smaller wordlist, and use
JtR's wordlist rules, etc. to modify and extend it on the fly as needed.

I understand that with pre-applied rules you can filter out duplicates
(such as with our "./unique" program), which is about the only valid use
case for such large wordlists. If that's what you do, you should have
no need to edit the resulting wordlist.

Having a large generated wordlist _and_ wanting to edit it indicates
that you need to take a step back, and reconsider what you're doing.

IIRC, at some point in the past you mentioned you were using Crunch,
but perhaps you don't anymore? There's no valid use case for Crunch
along with JtR as we have equivalent or better functionality built in.

I hope this helps.

Alexander
Eric Oyen
2018-09-01 17:16:29 UTC
Permalink
Well, the problem I am finding is that there are always more passwords to add, and incremental mode would include stuff that hardly anyone uses. Mostly, I would just add to the end of the last file until it hits max size and then create another.

Also, I have been finding (through some reading) that folks are also starting to use URLs as passwords, and we know how long those can get. BTW, this is both part of my continued efforts to crack one of my own hashes and the start of a project (that hopefully turns into a paying job soon).

I know that some of my methodology seems a bit archaic, but as they say, if it works, use it.

BTW, this is also part and parcel of some of my own education. My IT skills are a little rusty, and at my age (53) I would still like to get back into the workforce and not feel like I am playing catch-up with the younger set. :)

-Eric
Matt Weir
2018-09-01 18:39:14 UTC
Permalink
I sometimes find myself doing this when using lists of previously
cracked passwords. I like to keep the sources separate, which means a
lot of files. Admittedly there is a lot of overlap, so this is
usually not the optimal way to do things, but it works for me. I've
been mostly using john's '--pipe' option for this, since I can cat the
files out, pipe them into JtR, and still have mangling rules applied to
them. Just FYI, the main difference between '--pipe' and '--stdin' is
that pipe allows you to apply mangling rules to incoming guesses;
stdin does not.
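As a concrete sketch (the file and hash-file names here are just placeholders), a combined run with rules applied might look like:

```shell
# Concatenate several wordlist files and feed them to john in one run.
# --pipe reads stdin in chunks, so --rules can still mangle each word;
# --stdin would read the same stream but cannot apply rules.
cat wordlist.a wordlist.b wordlist.c | ./john --pipe --rules hashes.txt
```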

Another approach I used to use before '--pipe' was the 'xargs' command
on Linux-based systems. I'd need to dig into some of my old scripts
for examples with JtR, but here is a link to a page describing xargs
usage:

http://offbytwo.com/2011/06/26/things-you-didnt-know-about-xargs.html
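From memory, the xargs variant looked something like this (untested; the file and hash-file names are placeholders, and note that each invocation is a separate john session):

```shell
# Run john once per wordlist file, one after another.
# printf emits one file name per line; xargs -I starts a separate
# john process for each, substituting the name at {}.
printf '%s\n' wordlist.* | xargs -I {} ./john --wordlist={} hashes.txt
```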

Matt
Jerry Kemp
2018-09-01 16:35:57 UTC
Permalink
I have split up lists, like you discuss below, then ran each in a serial fashion via a
standard Korn shell script.

Worked fine for me, YMMV.

Jerry


Eric Oyen
2018-09-01 18:20:24 UTC
Permalink
You have a copy of that script I can take a look at?

-Eric
Jerry Kemp
2018-09-01 23:57:02 UTC
Permalink
I'm hesitant to say, as I am more embarrassed by what I have done than anything, but here it is in a nutshell.

When I first started playing with word lists, I literally went out on the Internet and grabbed every word list I could find. Along
with that, every time there was a big security leak, I went and read and learned all I could, and (of course) grabbed any word list
I could find associated with that particular leak.

I took all those lists, cat'ed them together, then sorted them and ran them through "uniq" to eliminate any duplicates. At the
time, I felt I was clever, but obviously I ended up with a giant file too big to realistically edit and maintain. So I broke it
down into multiple parts, i.e.

wordlist.a
wordlist.b
wordlist.c
wordlist.d

aside from a "for" loop to loop thru each file, the basics of the script are:

#!/bin/ksh

tput clear
echo "starting wordlist.a"
john --wordlist=wordlist.a --fork=62 ./password.file
echo "complete with wordlist.a"
echo " "
echo "starting wordlist.b"
...and so on for wordlist.b through wordlist.d.

Again, I would just stick all the word list files in a "for" loop. No ampersand here ( & ); I typically run it in a
serial fashion, where once one word list finishes, the next one starts.

I have this running on a 64 proc/thread Sun/Oracle T4-1:

t4 /root 568 # psrinfo | wc -l
64
t4 /root 569 #

And I will typically have john running when I am not using the machine for something else: I use 62 forks for john and leave a
couple of threads free for when I need to do something small and don't want to disturb john.

Even with duplicate wordlist entries, probably the smart move for me would have just been to leave all my wordlist stock alone.

Hope this helps,

Jerry





