m***t posts: 254 | 1 Had to compile a servlet with java, only to find the FC5 box in my office has no j2sdk, and javasoft doesn't even offer a URL you can wget directly. So I'm stuck grinding it out on my laptop at home. TNND. |
|
h*********s posts: 5 | 2 How about wget/fetch?
Then tools like perl/python can be used to process the result. |
|
P********e posts: 2610 | 3 What's the difference from wget? |
|
|
p****s posts: 32405 | 5 I set the server up on two different machines; same situation on both.
Also, on the client side I tested both ftp and wget, and both died quietly with a timeout. :( |
|
p****s posts: 32405 | 6 Now we can narrow the problem down a bit.
Honestly, the silent timeout only happens when the transfer goes over the wireless link; that's why I said earlier that even when the server is pingable, there is still a latency of tens of milliseconds.
If we skip the wireless hop, ftp and wget tests between the client and a machine on the wired LAN both work fine; the files transfer without a hitch.
But the whole point is to prove that the ftp protocol also works over wireless... |
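For the wireless timeouts above, two knobs worth trying are wget's own timeout/retry flags and FTP passive mode (a classic culprit behind FTP stalls across NAT or flaky links). A minimal sketch, with the deadline value and URL below purely made up:

```shell
#!/bin/sh
# Wrap any fetch command with an overall deadline, using coreutils timeout(1),
# so a stalled transfer fails fast instead of hanging forever.
# Usage: fetch_with_deadline SECONDS command args...
fetch_with_deadline() {
    deadline=$1; shift
    timeout "$deadline" "$@"
}

# Example invocation (hypothetical server address):
#   fetch_with_deadline 30 wget --tries=3 --timeout=10 --passive-ftp \
#       ftp://192.168.1.10/pub/testfile
```

If active-mode FTP is the problem, `--passive-ftp` alone often fixes the "connects but then times out" symptom.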
|
|
m******t posts: 2416 | 8
A shell script that calls wget? |
|
w***g posts: 5958 | 9 #!/bin/bash
#
# I use this to get stock data from Yahoo.
# opentick is a good choice if you want more than this.
#
if [ -z "$1" ] || [ -z "$2" ]
then
    printf "Fetch historical data from yahoo.com.\n"
    printf "Usage:\n\t%s symbol output\n\n" "$0"
    exit 1
fi
wget --quiet -O "$2" "http://ichart.finance.yahoo.com/table.csv?s=$1&g=d"
n=`head -1 "$2" | grep "Date,Open,High,Low,Close,Volume,Adj Close" | wc -l`
if [ $n -ne 1 ]
then
    rm "$2"
    printf "Error fetching the symbol \"%s\".\n" "$1"
fi |
|
b******n posts: 592 | 10 Try wget. I know google blocks requests sometimes. |
|
c**t posts: 2744 | 11 curl and wget are your friends |
|
r****o posts: 1950 | 12 in linux, can use system("wget " + website) |
|
c*****t posts: 1879 | 13 Several approaches:
1. greasemonkey (firefox addon).
2. use wget
3. use java along with the htmlunit library |
|
c**t posts: 2744 | 14 It depends on how the page is built. If it is an applet, or a flash form, then unless you find a backdoor it is basically hopeless.
If it is a plain web form, then wget or curl on the command line will do, or for anything more involved, perl, java, or .Net can all solve it. |
|
h********g posts: 116 | 15 It is an ordinary asp form.
Can curl or wget handle Chinese? |
|
h**o posts: 548 | 16 I want to write a program (shell or perl) that checks whether my snoop capture contains a given string, rather than eyeballing it, so Wireshark and
ethereal don't suit me.
Say my snoop capture is a.snoop; the snoop.txt mentioned earlier is what I got from
snoop -i a.snoop -x0 > snoop.txt. But the readable text in snoop.txt sits in the rightmost column, so I'd
still have to extract that column somehow before I can search for my string. Tedious.
Now I have thought of another approach:
snoop -i a.snoop -v > a1.txt, which gives me the packet headers in a1.txt (and
the string I'm looking for should be in the http header). For example:
TCP: No options
TCP:
HTTP: ----- HyperText Transfer Protocol -----
HTTP:
HTTP: GET /beast_uns/index.php HTTP/1.0
HTTP: User-Agent: Wget/1.11.1 (Red Hat modified)
HTTP: Accept: |
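If the goal is only to test whether the capture contains a string, one shortcut (my suggestion, not something the poster tried) is to skip snoop's hex formatting entirely: run strings(1) over the raw capture file to pull out the printable runs, then grep:

```shell
#!/bin/sh
# Does a binary capture file contain the given text anywhere?
# strings(1) extracts printable byte runs; grep -q sets only the exit status.
capture_has() {
    strings "$1" | grep -q "$2"
}

# Example (hypothetical file name):
#   capture_has a.snoop "GET /beast_uns/index.php" && echo found
```

This works because HTTP headers travel as plain ASCII inside the packets, so they survive as printable runs in the raw capture.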
|
b******n posts: 592 | 17 Perl or Python. Never touch C/C++ for this kind of task. You can even use
bash for this kind of task: grep + wget |
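The grep + wget combination above can be sketched as a tiny pipeline; splitting the fetch step from the filter step keeps the filter testable without a network. The URL below is hypothetical:

```shell
#!/bin/sh
# Filter step: print the contents of <title>...</title> from HTML on stdin.
extract_title() {
    sed -n 's/.*<title>\(.*\)<\/title>.*/\1/p'
}

# Fetch step: wget -qO- writes the page body to stdout.
# Usage (hypothetical URL):
#   wget -qO- http://example.com/ | extract_title
```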
|
j****a posts: 42 | 18 I want to write a small program that automatically queries a website, for example for airfares between two cities.
How do people usually do this? Just wget or curl?
Also, if a site hides its parameters, how do I figure out what
query string to send?
I have zero experience with this; would some expert please give me a few pointers? Thanks. |
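To find the hidden parameters, look at what the browser actually sends: the form's HTML source, or a proxy/packet trace of a real submission. Then replay it with wget --post-data. The field names and URL below are invented; the percent-encoding helper is the only part that runs locally:

```shell
#!/bin/sh
# Percent-encode one string for use in a query string (plain POSIX sh).
urlencode() {
    s=$1
    out=""
    while [ -n "$s" ]; do
        c=${s%"${s#?}"}        # first character of $s
        s=${s#?}               # rest of $s
        case $c in
            [a-zA-Z0-9.~_-]) out="$out$c" ;;
            *) out="$out$(printf '%%%02X' "'$c")" ;;
        esac
    done
    printf '%s' "$out"
}

# Replay a form submission (hypothetical URL and field names):
#   wget --post-data "from=$(urlencode 'New York')&to=$(urlencode 'Boston')" \
#       -O result.html http://example.com/search.asp
```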
|
p***o posts: 1252 | 19 You can read the source code of wget, especially the parts
handling --continue. |
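At the protocol level, --continue asks the server for just the missing tail of the file (an HTTP Range request, or FTP REST). From the outside you can get most of the benefit by simply looping wget -c until it succeeds. A sketch; the retry cap and URL are arbitrary:

```shell
#!/bin/sh
# Retry a command until it succeeds, up to a fixed number of attempts.
# Usage: retry MAX command args...
retry() {
    max=$1; shift
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge "$max" ] && return 1
        sleep 1
    done
}

# Resume-and-retry a large download (hypothetical URL):
#   retry 10 wget -c http://example.com/big.iso
```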
|
|
b******n posts: 592 | 21 perl is okay if you use your own code and the script is only a few lines.
For text processing it is still easier to use than Python. If you are
into regex, try downloading kiki or another regex editor.
sed/awk are the typical unix tools. A unix shell script is okay for text
processing,
but if you want to move things around and extract data into different formats,
a more advanced scripting language such as Perl or Python suits better than shell.
Either way, learning a scripting language is not a bad idea. For downloading
w... |
|
e****d posts: 895 | 22 Last trade for XOM and MSFT
q)d:{flip`sym`trade!("SF";csv)0:system"wget 'http://finance.yahoo.com/d/quotes.csv?s=",("+"sv x),"&f=sl1' -O -"}("XOM";"MSFT")
q)d
sym trade
-----------
XOM 82
MSFT 26.025 |
|
j******n posts: 271 | 23
Try wget or curl, both of which are Open Source. |
|
c*****m posts: 1160 | 24 It depends on what technology your site, and its "login", are built with. If it is flash, you will definitely need a mouse-macro tool of some kind; if it is an old-style login, even wget may be able to handle it. |
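For an old-style login, the usual wget recipe is: POST the credentials once, save the session cookie, then fetch the protected pages with that cookie jar. A sketch; the URLs and form field names are invented:

```shell
#!/bin/sh
# Log in via a plain HTML form, then fetch one page with the session cookie.
# Usage: login_and_fetch LOGIN_URL USER PASS PAGE_URL
login_and_fetch() {
    if [ $# -ne 4 ]; then
        echo "usage: login_and_fetch login_url user pass page_url" >&2
        return 2
    fi
    jar=$(mktemp) || return 1
    wget -q --save-cookies "$jar" --keep-session-cookies \
         --post-data "username=$2&password=$3" -O /dev/null "$1" \
      && wget -q --load-cookies "$jar" -O - "$4"
    status=$?
    rm -f "$jar"
    return $status
}

# Usage (hypothetical site and field names):
#   login_and_fetch http://example.com/login.asp alice secret \
#       http://example.com/members/page.asp > page.html
```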
|
d****n posts: 1637 | 25 1. find a file by name/pattern
2. find an exact word/pattern in a text file, count occurrences of this pattern/word,
then exclude this word/pattern from the file and do the same
3. sort a table (numbers) and print unique counts by column x.
4. replace/insert/delete a pattern/word in a text file.
5. rename/move/copy files from prefix/suffix "xxx" to "yyy"
6. given a table of numbers, calculate the average/sum/stdev of columns x, y,
z
7. how, and in how many ways, can you detach a process while it is running? before
it starts?... |
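Most of the items above are one-liners with find/grep/sort/awk. A few sketches (file names are placeholders); the helpers read stdin so they are easy to try on sample data:

```shell
#!/bin/sh
# 2. count occurrences of a word (reads stdin, prints a number)
count_word() {
    grep -o -w "$1" | wc -l
}

# 3. unique counts by column 1 (reads stdin)
uniq_counts() {
    awk '{print $1}' | sort | uniq -c
}

# 6. average of column c (reads stdin)
col_avg() {
    awk -v c="$1" '{ s += $c; n++ } END { if (n) printf "%g\n", s / n }'
}

# 1. find a file by name:       find . -name 'xxx*'
# 4. replace a word in a file:  sed 's/old/new/g' in.txt > out.txt
# 5. rename xxx* to yyy*:       for f in xxx*; do mv "$f" "yyy${f#xxx}"; done
```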
|
|
d*****t posts: 7903 | 27 Looks like it can; I'm reading up on it now. Can wget search for specific files?
Thanks a lot! |
|
d*****t posts: 7903 | 28 At a glance it looks much like wget. Can it download only specified files? I only need *.xml |
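wget itself can do this: recursive mode plus an accept list restricts what gets saved to matching file names. A sketch; the depth limit and URL are my assumptions:

```shell
#!/bin/sh
# Recursively fetch only *.xml files from a site.
#   -r    recursive
#   -np   do not ascend to the parent directory
#   -l 3  depth limit
#   -A    comma-separated accept list of file name patterns
fetch_xml() {
    if [ -z "$1" ]; then
        echo "usage: fetch_xml url" >&2
        return 1
    fi
    wget -r -np -l 3 -A '*.xml' "$1"
}

# Example (hypothetical URL):
#   fetch_xml http://example.com/data/
```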
|
t****a posts: 1212 | 29 You first have to match the download links, then fetch them one by one.
To match the download links, you first have to parse the html; of course, if you would rather hack it out without a real parser, that works too.
If you are using python, give beautifulsoup a try.
Once you have matched the links, you can download them with curl or wget on linux; python's urllib2 (or libcurl?) works too. |
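The "match, then fetch" loop above can be sketched without a real HTML parser, with the usual caveat that regex-style extraction only holds up on well-behaved pages:

```shell
#!/bin/sh
# Pull href="..." targets out of HTML on stdin, one URL per line.
extract_links() {
    grep -o 'href="[^"]*"' | sed 's/^href="//; s/"$//'
}

# Fetch every extracted link (hypothetical URL):
#   wget -qO- http://example.com/list.html | extract_links \
#       | while read -r u; do wget "$u"; done
```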
|
|
|
d*****t posts: 7903 | 32 Thanks, but what do you mean by "matching"? Can't wget just do a recursive download directly? |
|
|