[TriLUG] wget, curl ???

James Jones jc.jones at tuftux.com
Mon Jul 21 09:43:00 EDT 2008


Robert,

Thanks for the information, but I don't want to download (save) any
pages; I just want a list of the files there.
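
Something like this might do it, assuming the server publishes a
plain directory index (the URL below is just a placeholder, and this
assumes GNU grep):

# print the link targets from the index page without saving
# anything to disk
curl -s http://example.com/files/ | grep -oE 'href="[^"]*"' | cut -d'"' -f2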

jcj

On Mon, Jul 21, 2008 at 8:02 AM, Robert Dale <robdale at gmail.com> wrote:
> On Mon, Jul 21, 2008 at 7:52 AM, James Jones <jc.jones at tuftux.com> wrote:
>> All,
>>
>> I want to capture a list of files on a website. I don't want to
>> download the files. I thought at first that wget would be best for
>> this, but it appears that it will download the files.
>>
>> What would be the simplest way to achieve my goal?
>
> If you don't use the recursive option, wget won't download the entire site.
>
> wget http://slashdot.org - will save the web page to a file 'index.html'
>
> curl http://slashdot.org - will print the web page to stdout
>
> --
> Robert Dale
> --
> TriLUG mailing list        : http://www.trilug.org/mailman/listinfo/trilug
> TriLUG FAQ  : http://www.trilug.org/wiki/Frequently_Asked_Questions
>
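
For the archives: another possibility is wget's spider mode, which
follows links without keeping any of the files. A rough sketch, not
tested here; the URL and depth are placeholders:

# walk one level of links without saving files, then pull the
# visited URLs out of wget's log (wget logs to stderr)
wget --spider -r -l 1 -np http://example.com/files/ 2>&1 \
  | grep -oE 'http://[^ ]+' | sort -u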


