m******t Posts: 2416 | 1
Untested but something like this:
find . -type f -exec sed -i 's/john/TOM/g' {} \;
or
find . -type f | xargs sed -i 's/john/TOM/g'
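A minimal, untested sketch of a safer variant, assuming GNU find/xargs/sed; the NUL-delimited hand-off keeps filenames with spaces or newlines intact (john/TOM are just the sample strings above):
# same replacement, but robust against odd filenames
find . -type f -print0 | xargs -0 sed -i 's/john/TOM/g'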
s*****o Posts: 1262 | 2 With $ locate xxx I can find every copy of the xxx file.
If one of them was just modified today and I want to update everybody with the newest version, how
do I do that?
Batch deletion, at least, can be done with:
$ locate xxx > file
$ xargs rm < file
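A hedged sketch of one way to push the newest copy over all the others; newest.xxx is only a placeholder for whichever copy was just edited, and this assumes locate prints one absolute path per line (run updatedb first so its database is current):
# overwrite every copy that locate reports with the freshly edited one
src=$(readlink -f newest.xxx)           # absolute path of the source (GNU coreutils)
locate xxx | while read -r target; do
    [ "$target" = "$src" ] && continue  # skip the source itself
    cp "$src" "$target"
done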
g**e Posts: 6127 | 3 find . -name \*.java | xargs cat | wc -l
p***o Posts: 1252 | 4 echo -> scanf/gets
xargs -> argc/argv
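A tiny illustration of that mapping, with a made-up ./prog standing in for any program: a pipe feeds the program's standard input (what scanf/gets read), while xargs turns the same stream into command-line arguments (what argc/argv receive).
echo "one two" | ./prog          # ./prog reads "one two" on stdin, as scanf/gets would
echo "one two" | xargs ./prog    # runs: ./prog one two  (argc = 3, argv[] filled)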
r*********r Posts: 3195 | 5 If the visiting programmer has the spare time to go looking for Master Foo,
he would be better off reading through the source of the Unix commands instead.
Things like find, xargs and sort are a pain to explain:
plenty of books spend a lot of effort on piles of junk examples and still fail to make them clear,
while a look at the source code is actually quite simple.
d****n Posts: 1637 | 6 1. find a file by name/pattern
2. find an exact word/pattern in a text file and count its occurrences;
then exclude that word/pattern from the file and do the same
3. sort a table (numbers) and print unique counts by column x
4. replace/insert/delete a pattern/word in a text file
5. rename/move/copy files from prefix/suffix "xxx" to "yyy"
6. given a table of numbers, calculate the average/sum/stddev of columns x, y, z
   (see the awk sketch after this list)
7. how, and in how many ways, can you detach a process while it is running? before
it starts? ...
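A minimal awk sketch for item 6, assuming a whitespace-separated table; column 3 and data.txt are placeholders:
# sum, average and (population) standard deviation of column 3
awk '{ s += $3; ss += $3 * $3; n++ }
     END { avg = s / n; print "sum=" s, "avg=" avg, "stddev=" sqrt(ss / n - avg * avg) }' data.txt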
t****a Posts: 1212 | 7 Don't use python, awk is enough! awk is the best and most convenient tool on Linux for handling
csv, tsv and the like; no need to go to the trouble of writing a python program. Google the awk wiki and you will
see how to use it.
Also, your file is very large, so I am guessing you are on a multi-CPU Linux server. In that case parallel processing
may be faster. The way I know is to split the file into small pieces, then run ls | xargs awk ... | cat with a
parallelism flag (I think it is -P), and merge the result files afterwards.
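A hedged sketch of that split-then-parallel idea, assuming GNU split and GNU xargs; big.tsv, the chunk size, the 4-way parallelism and the awk program (here: pull out column 2) are all placeholders:
split -l 1000000 big.tsv chunk_                       # one chunk per million lines
ls chunk_* | xargs -n1 -P4 awk -F'\t' '{ print $2 > (FILENAME ".out") }'
cat chunk_*.out > result.tsv                          # merge the per-chunk results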
h**********c Posts: 4120 | 10 Yours truly takes care of this with one easy line of shell, shake it off:
git status | grep modified | awk '{print $2}' | xargs -I{} rm {}
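A hedged alternative that skips parsing the human-readable git status text, assuming a reasonably recent git and GNU xargs: git ls-files -m lists tracked files with unstaged modifications, and -d '\n' keeps paths containing spaces intact.
git ls-files -m | xargs -d '\n' rm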
p*a Posts: 592 | 12 ls -1 *.cpp | sed 's/\(.*\)\.cpp/\1\.cpp \1\.C/' | xargs -n2 mv
p*a Posts: 592 | 13 ls -1 | awk '{print ("\"")($0)("\"")(" ")("\"")(tolower($0))("\"")}' | xargs -n2 mv
m****m Posts: 165 | 14 Let me join the fun too, heh:
find . -type f -print | xargs grep -i "yourstring" /dev/null
(the extra /dev/null makes grep always see more than one file, so it prints the matching file names)
p*a Posts: 592 | 15 ps -aef | grep cogt | grep server | awk '{print $2}' | xargs -i kill -9 {}
m*******m Posts: 182 | 16 find . -name '*.[do]' -print | xargs rm
If in NT, why not do a search for '*.d' and '*.o' files, and
delete them in the result window?
m*******m Posts: 182 | 17 Directly copying stuff over would probably not work.
You will run into similar dynamic link problems.
That said, to find a file foo and look for the string bar in it, you do:
find /usr -name foo -print | xargs grep bar
t*********l Posts: 30 | 18 I think this won't work. It should be:
rm `ls -1 | xargs grep -l mystr`
w*g Posts: 14 | 19 ls img*.jpg | sed -e 'p' -e 's/jpg/JPG/' | xargs -n 2 mv
(sed -e 'p' prints each name unchanged before the substituted one, so xargs -n 2 hands mv old/new pairs)
D**e Posts: 10169 | 20 find ./ -print | xargs grep user
j********e Posts: 27 | 21 In Cygwin, I use
find . -name "*.*" | xargs grep "pattern"
Not sure whether it works under Solaris
s**s Posts: 242 | 22 Use crontab -e to set up your scheduled jobs.
man crontab has examples:
EXAMPLES
Example 1: Cleaning up core files
This example cleans up core files every weekday morning at
3:15 am:
15 3 * * 1-5 find $HOME -name core 2>/dev/null | xargs rm -f
Example 2: Mailing a birthday greeting
0 12 14 2 * mailx john%Happy Birthday!%Time for lunch.
Example 3: Specifying days of the month and week
This example
0 0 1,15 * 1
would run a command on the first and fifteenth of each
month, as well as on every Monday.
a****n Posts: 48 | 23 find . -name foo | xargs ls -lat
m**c Posts: 90 | 24
Try to see if "sed" can help:
sed -i'.bak' -e 's/^\([^\/]*\)\/\{2,\}\(.*\)$/\1\/*\2*\//g' filename.c
The above command will replace "//..." with "/*...*/" in filename.c (and make a
backup of "filename.c" named "filename.c.bak").
To replace all files, use "find" and "sed" together:
find . -iname '*.c' -print0 | xargs -0 sed -i'.bak' -e 's/^\([^\/]*\)\/\{2,\}\(.*\)$/\1\/*\2*\//g'
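A quick way to sanity-check the substitution on a single made-up line before running it over the whole tree:
echo 'int x = 0;  // loop counter' | sed -e 's/^\([^\/]*\)\/\{2,\}\(.*\)$/\1\/*\2*\//g'
# prints: int x = 0;  /* loop counter*/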
t******q Posts: 117 | 25 #!/bin/bash
# assume all programs are under your current dir and all are executable
progset=$(ls)                       # no spaces around '=' in an assignment
for prog in $progset
do
    ./"$prog" &                     # launch in the background
    sleep 900
    # record the PID, then kill it; grep -v grep drops the grep process itself
    ps aux | grep "$prog" | grep -v grep | gawk '{print $2}' >> prog.log
    ps aux | grep "$prog" | grep -v grep | gawk '{print $2}' | xargs kill
done
h*******y Posts: 896 | 27 Thanks everyone for the replies. I also found a very convenient method:
ls *.ps | xargs -n1 ps2pdf
d*****y Posts: 1365 | 28 [ The following text is cross-posted from the DataSciences board ]
From: donaghy (I am an NBA referee), Board: DataSciences
Subject: Data science/Quant analysis positions
Posted at: BBS 未名空间站 (Tue Mar 3 21:38:32 2015, US Eastern)
We are seeking Quantitative Research Analysts who will work as Financial
Engineers, Data Scientists, Risk Analysts or Software Developers in our
Advanced Data Analytics Unit (ADAU) at Financial Industry Regulatory
Authority (FINRA).
ADAU is a newly formed unit at FINRA to leverage on data analytics to
effectively examine and c...