All topics - Topic: ipython1
l*****c
Posts: 1153
1
From topic: Programming board - Just starting with python
OK. I read a little about IPython1. It uses MPI to implement its architecture. I'm going to read more this weekend to see whether it is worth implementing one that accommodates Hadoop, which I would expect to be much less work and handier than implementing IPython1.
r****t
Posts: 10904
2
From topic: Programming board - A python question
for i in range(200):
    os.system("qsub -l ...")
I've already given up on pbs; these days I handle this kind of problem with IPython1. If the output data is not too large,
    out = client.map(func, [file001, file002, ... file200])
is all it takes.
If the output data is too large, I think this kind of thing is best done with hadoop; I haven't used it yet.
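For reference, the IPython1 parallel client later evolved into the ipyparallel package; the sketch below shows the same map() pattern with that modern API, assuming a cluster has already been started with ipcluster, and using a made-up process_file function in place of whatever each qsub job actually did.

# Hypothetical sketch only: ipyparallel is the descendant of the IPython1 client.
# Assumes engines are running, e.g. started with: ipcluster start -n 4
import ipyparallel as ipp

def process_file(path):
    # placeholder for the per-file work each qsub job used to do
    with open(path) as f:
        return len(f.read())

rc = ipp.Client()                    # connect to the running cluster
view = rc.load_balanced_view()       # hand tasks to whichever engine is free
files = ["file%03d" % i for i in range(1, 201)]
out = view.map_sync(process_file, files)   # parallel replacement for the qsub loop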
r****t
Posts: 10904
3
From topic: Programming board - Just starting with python
You are welcome to look into python. :)
There is multithreading, but a single process cannot utilize multiple cores no matter how many threads it has, because of the GIL.
With the default CPython you have to look elsewhere for a parallelized map() operation, e.g. the processing module or IPython1. Or use mpipython, etc.
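A minimal sketch of the GIL effect described above, assuming a multi-core machine; busy() is just a made-up CPU-bound workload for illustration.

# Illustrative only: under CPython's GIL, CPU-bound threads give no speedup.
import time
import threading

def busy(n=10**7):
    s = 0
    for i in range(n):
        s += i

t0 = time.time()
for _ in range(4):
    busy()
print("sequential:", round(time.time() - t0, 2), "s")

t0 = time.time()
threads = [threading.Thread(target=busy) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("4 threads: ", round(time.time() - t0, 2), "s  # roughly the same wall time")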
r****t
Posts: 10904
4
From topic: Programming board - Just starting with python
I don't know of a map() implemented with hadoop.
But there is such a thing in IPython1, and perhaps in the "processing" module that is going to enter the standard library in python2.6. Once processing is in the standard library, there is hope of a map() that makes use of SMP...
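As it turned out, the processing module did enter the standard library in Python 2.6 under the name multiprocessing; below is a minimal sketch of the SMP-aware map() it provides, with square() as a placeholder workload.

# multiprocessing is the stdlib name of the "processing" module since Python 2.6;
# Pool.map fans the calls out across all available cores.
from multiprocessing import Pool, cpu_count

def square(x):           # placeholder CPU-bound function
    return x * x

if __name__ == "__main__":
    with Pool(cpu_count()) as pool:
        results = pool.map(square, range(1000))
    print(results[:5])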

h***z
Posts: 233
5
From topic: Programming board - How to use multi-core to speed Python program
You can also try Stackless and IPython1.

r****t
Posts: 10904
6
From topic: Programming board - How to use multi-core to speed Python program
You don't necessarily have to reach for parallel python right away; I don't know how far along parallel python is these days, so I can't say.
But even in standard CPython you can use a threading-style interface to exploit multiple cores: the way to do it is the processing module, which is going into python2.6. It offers the syntax of the python threading module but uses processes underneath to utilize multiple CPUs. If your code is not just for your own use, this is the better choice.
The threading approach has not caught on in the python world, because of the GIL and because it cannot scale to multiple CPUs across machines. The python answer is to use processes. For process-based parallelism there is IPython1 + mpi.
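A small sketch of that point, using multiprocessing (the name the processing module took in the standard library): Process is a near drop-in for threading.Thread, but each worker is a separate OS process, so the GIL no longer serializes them; worker() here is a made-up placeholder job.

# The multiprocessing API deliberately mirrors threading's:
# swap Thread for Process and the same code runs on real OS processes.
import threading
import multiprocessing

def worker(name):                 # placeholder job
    print("running in", name)

if __name__ == "__main__":
    t = threading.Thread(target=worker, args=("a thread",))
    p = multiprocessing.Process(target=worker, args=("a process",))
    for job in (t, p):
        job.start()
    for job in (t, p):
        job.join()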