[TriLUG] Perl and SMP

Tim Jowers timjowers at gmail.com
Tue Jun 5 10:44:21 EDT 2007


I think processor scheduling is the most interesting thing about operating
systems.

In about 1997 I was on a server team that benchmarked SQL Server. It
basically did not scale above 4 SMP processors. They've probably done some
re-architecting since then. Web server apps often allocate a pool of threads
(say 12 for a uniprocessor) since web workloads are mostly I/O-bound.
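
To make that concrete, here is a minimal worker-pool sketch in Perl,
assuming a perl built with ithreads and the stock threads/Thread::Queue
modules; the pool size of 12, the fake jobs, and the handle_request routine
are just placeholders for illustration:

  #!/usr/bin/perl
  # Minimal worker-pool sketch, assuming perl was built with ithreads.
  use strict;
  use warnings;
  use threads;
  use Thread::Queue;

  my $pool_size = 12;                  # e.g. 12 workers for an I/O-bound load
  my $queue     = Thread::Queue->new;

  # Start the pool; each worker blocks on the queue until work arrives,
  # and exits when it dequeues undef.
  my @workers = map {
      threads->create(sub {
          while (defined(my $job = $queue->dequeue)) {
              handle_request($job);    # hypothetical per-request work
          }
      });
  } 1 .. $pool_size;

  $queue->enqueue($_) for 1 .. 100;            # hand out 100 fake jobs
  $queue->enqueue(undef) for 1 .. $pool_size;  # one "stop" marker per worker
  $_->join for @workers;

  sub handle_request {
      my ($job) = @_;
      select(undef, undef, undef, 0.01);       # pretend to wait on I/O
      print "worker ", threads->tid, " finished job $job\n";
  }

The point is simply that the pool size is fixed up front; with an I/O-bound
load the workers spend most of their time blocked, so a dozen of them on a
uniprocessor is not as silly as it sounds.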

Processors also pre-execute instructions in parallel. How? The instructions
on both sides of a branch may be executed speculatively, and only the results
from the path actually taken are kept. This way the processor hopes to keep
its pipeline and functional units busy at all times.

I haven't looked much at the memory architectures of the multi-cores but
assume the processors have a fast interconnect to a shared cache. SMP
operating systems like Linux already manage memory with respect to a
processor and already default to processor affinity for a process, so the
speedup may not be significant. I can say using a dual-core or other
multi-processor system does give a much nicer user experience. OTOH, one may
need the 80-core just to run all of the background corporate software! Not
to mention downloading TV, making calls, uploading camera images, tracking a
buddy, and the other things people are trying to do now. In Linux, a thread
is a process for all scheduling concerns: a task_struct is scheduled on the
processor, and it corresponds to a process or thread in userland.
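
Since each task_struct is scheduled on its own, plain fork() already lets
the kernel spread CPU-bound Perl work across cores; here is a small sketch
(the core count and the busy-loop are just assumptions for illustration):

  #!/usr/bin/perl
  # Sketch: Linux schedules every task_struct independently, so forking
  # one worker per core lets the kernel spread CPU-bound work across
  # processors.  The core count of 2 and the busy-loop are assumptions.
  use strict;
  use warnings;

  my $cores = 2;
  my @pids;
  for (1 .. $cores) {
      my $pid = fork;
      die "fork failed: $!" unless defined $pid;
      if ($pid == 0) {                 # child: burn some CPU, then exit
          my $sum = 0;
          $sum += $_ for 1 .. 5_000_000;
          exit 0;
      }
      push @pids, $pid;                # parent: remember the child
  }
  waitpid($_, 0) for @pids;            # wait for all workers
  print "all $cores workers done\n";

Running the same script under something like "taskset -c 0 perl workers.pl"
pins everything to one CPU, which gives an easy before/after feel for what
the extra core actually buys you.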

I've forgotten what the overhead of SMP bookkeeping is but want to say it is
5% or so for the second processor and grows geometrically with each new
processor. Or maybe the 5% is the overhead for task switching. Either way,
parallelizing has to buy back more than that ~5% before it is a net win. I
speculate the cost of transferring data between processors is why pipelining
within a core is so much more effective than spreading work across cores.
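
For a back-of-the-envelope feel, here is a tiny Perl sketch combining
Amdahl's law with that kind of flat per-processor overhead; the 90% parallel
fraction and the 5% figure are assumptions, not measurements:

  #!/usr/bin/perl
  # Back-of-the-envelope estimate: Amdahl's law plus a flat per-processor
  # bookkeeping overhead.  The 90% parallel fraction and the 5% overhead
  # figure are assumptions for illustration.
  use strict;
  use warnings;

  my $parallel_fraction = 0.90;   # portion of the work that can run in parallel
  my $overhead_per_cpu  = 0.05;   # assumed SMP bookkeeping cost per added CPU

  for my $cpus (1, 2, 4, 8, 16) {
      my $amdahl   = 1 / ((1 - $parallel_fraction) + $parallel_fraction / $cpus);
      my $overhead = 1 + $overhead_per_cpu * ($cpus - 1);
      printf "%2d CPUs: ideal %.2fx, with overhead %.2fx\n",
          $cpus, $amdahl, $amdahl / $overhead;
  }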

But then I read a web page claiming a 1987 Mac is a faster word processor
than a 2007 Windows machine, and I realize software is becoming much more
complex and being written in a much less efficient manner. I guess it is
"fast enough".

TimJowers

On 6/5/07, Andrew C. Oliver <acoliver at buni.org> wrote:
>
> Generally though defining "task" is up to you.  Especially where there
> are interactions with peripherals and ports.  That task is generally
> defined in some kind of thread.  In most OOP systems that thread is some
> kind of inherited object...
>
> -andy
>
> Andrew Perrin wrote:
> > I guess I'm just used to my computer being smarter than I am....
> >
> > To my mind, I imagine just about any task given to a computer can be
> > broken down into smaller tasks until these tasks are at the level of a
> > long series of processor instructions. I could see a sort of dispatch
> > system sending each of these instructions to a different processor in
> > line, then re-assembling the results as they came in. Again, I'm sure
> this
> > is hopelessly simplistic, and the process in question is now finished,
> so
> > it's all academic. But then again, I'm an academic....
> >
> > Andy
> >
> > ----------------------------------------------------------------------
> > Andrew J Perrin - andrew_perrin (at) unc.edu -
> http://perrin.socsci.unc.edu
> > Assistant Professor of Sociology; Book Review Editor, _Social Forces_
> > University of North Carolina - CB#3210, Chapel Hill, NC 27599-3210 USA
> > New Book: http://www.press.uchicago.edu/cgi-bin/hfs.cgi/00/178592.ctl
> >
> >
> >
> > On Mon, 4 Jun 2007, Tanner Lovelace wrote:
> >
> >> On 6/4/07, Andrew Perrin <clists at perrin.socsci.unc.edu> wrote:
> >>> Well, definitely *not* being a computer science type, I could imagine
> a
> >>> "smart" interpreter that would open as many threads as there were
> >>> processors, then assign particular tasks to each process and return
> them
> >>> to the main, thereby making the threading transparent to the
> user/process.
> >>> I assume this has been thought of and either implemented or rejected
> for a
> >>> good reason, but that's what I was thinking about.
> >>>
> >>> Andy
> >> Andy,
> >>
> >> That assumes there is more than one task.  What if there
> >> is only one?  Or what if there are several, but there are dependencies
> >> among them?  How is the computer supposed to know without
> >> someone telling it that?
> >>
> >> If it's a standard perl program, though, chances are that there is
> >> only one task and it goes from beginning to end.  It may happen
> >> over and over but once again, how is the computer supposed to
> >> know that without anyone telling it.
> >>
> >> Cheers,
> >> Tanner
> >> --
> >> Tanner Lovelace
> >> clubjuggler at gmail dot com
> >> http://wtl.wayfarer.org/
> >> (fieldless) In fess two roundels in pale, a billet fesswise and an
> >> increscent, all sable.
>
>
> --
> Buni Meldware Communication Suite
> http://buni.org
> Multi-platform and extensible Email,
> Calendaring (including freebusy),
> Rich Webmail, Web-calendaring, ease
> of installation/administration.
>


