[TriLUG] High resolution timer calls and the kernel
ed at eh3.com
Tue Jan 13 13:21:21 EST 2004
On Tue, 2004-01-13 at 12:41, Sam Kalat wrote:
> I have a real-time computer vision project I'm working on that does some
> analysis of incoming 10 fps video from a pair of webcams. The cycle of the
> cameras is reliable so each frame comes in 100 msec after the last. The only
> exception is the first frame which seems to take a while as the camera
> So what I want to do is as much processing as possible on the existing frame
> before switching to the next frame. I have an algorithm that can be cut
> short and still be useful. Rather than alter the parameters to get the frame
> rate I want, I have the frame rate and want to tune the parameters in
> real-time to get as much done as possible without exceeding a 100 msec deadline.
> I ran into a few strange things when it came to keeping track of the time in
> small increments. Was hoping someone could explain.
> I think the timer call I used was gettimeofday(). One call takes about 2
> usec, which is small compared to the 1300 usec or so for one frame of video
> capture, or the curfew of 100 msec = 100000 usec.
> If I make a loop of repeated calls for the time, however, the duration of
> these calls increases. I made a counter for how many times I could call for
> the time before breaking 100 msec, along with capturing video at that rate.
> It looks like this:
> Frame Loops
> 0 1
> 1 399935
> 2 383604
> 3 367687
> 21 1178
> 22 1
> 23 1
> So if you do nothing but ask for the time, like an annoying kid screaming "are
> we there yet?", the response goes from immediate to quite slow. I don't
> think this happens when there is real processing that puts some delay between
> the calls. I tried to simulate this with nanosleep() but that actually
> sleeps a good bit more than I requested, so I didn't get very far with a
> control to compare to.
The "slowdown" you're seeing most likely is due to a combination of
scheduling of your user-space process and "other things" (eg. I/O) that
your process is trying to do.  Remember, there are probably dozens of
processes running on your system and they have to share CPU resources.
See the email discussion thread at
for more discussion and some code.
Also, the "are we there yet" way of dealing with time is mighty
inefficient! Have you thought about using some sort of interrupt
mechanism so that you can cleanly "cut short" the results from the
analysis of one frame _only_ when you get a signal that the next frame
is ready? That way, you won't have to explicitly check the time.
Edward H. Hill III, PhD
office: MIT Dept. of EAPS; Room 54-1424; 77 Massachusetts Ave.
Cambridge, MA 02139-4307
email: eh3 at mit.edu, ed at eh3.com