(keitai-l) Re: Individual Video

From: Christian Molstrom <cmolstrom_at_lightsurf.com>
Date: 03/12/02
Message-ID: <004c01c1c9ca$7badf7a0$c56510ac@office.lightsurf.com>
>
> keitai-l@appelsiini.net writes:
> >I didn't get it.  The point is to squeeze the data *before*
> >it gets to the client.
>
> I took the point to be that one needs LESS data if one simply paints
> the fovea rather than the whole retina as one need only paint the
> portion of the scene (the "screen") that would hit the fovea if the
> eye were looking at that portion. Hence one would need to send only
> the data that represented that portion of the screen. So - LESS DATA.
> A genuinely virtual screen. Other people (whose fovea was not being
> tracked by the laser) would see a mess.

Oh yeah I got that.  But then I didn't.  I think your
comment below:

> Latency of feedback of eye movement is the main problem I suspect as
> this would have to be fed back to the server so that a different data
> stream was sent (that represented a different part of the screen). I
> wonder what kind of latency would be acceptable... Human eyes do not
> generally move THAT fast I guess...

is what I was talking about.  It seems like this--if I understand
correctly--would require some kind of near-instant eye-to-image-server
telepathy.  Nice concept anyway.  Perhaps (highly doubtful) he was
thinking that there would be some sophisticated frequency cancellation
as raw image data collided with fovea data transmitted by the viewing
client, and whatever was not cancelled would be the foveal spot.
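For what it's worth, the "LESS DATA" claim can be put in rough numbers.
A back-of-the-envelope sketch (every figure below is my own assumption
for illustration, nothing from this thread): the fovea covers only
about 2 degrees of a field of view that a near-eye display might span
at, say, 30 degrees, so a foveal patch is a tiny fraction of the frame:

```python
# Back-of-the-envelope estimate of the data savings from sending only
# the foveal patch instead of the whole frame. All numbers here are
# illustrative assumptions, not measurements.

FULL_W, FULL_H = 640, 480   # assumed full "virtual screen", in pixels
FIELD_DEG = 30.0            # assumed horizontal field of view, degrees
FOVEA_DEG = 2.0             # fovea covers roughly 2 degrees

# Pixels per degree, assuming uniform angular resolution across the field.
ppd = FULL_W / FIELD_DEG

# Side of a square foveal patch, in pixels.
patch = int(FOVEA_DEG * ppd)

full_pixels = FULL_W * FULL_H
fovea_pixels = patch * patch

print(f"foveal patch: {patch}x{patch} px")
print(f"data sent: {100.0 * fovea_pixels / full_pixels:.1f}% of full frame")

# The latency side of the argument: a saccade can land the eye on a new
# spot in well under ~100 ms, so the whole loop (eye tracker -> server
# -> new patch over the network -> display) would have to fit inside
# that window, or the viewer looks at a region that was never painted.
SACCADE_BUDGET_MS = 100     # order-of-magnitude assumption only
```

Under those assumptions the patch is under one percent of the frame,
which is the upside; the downside is exactly the round-trip budget the
quoted comment worries about.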

Besides, would I want someone looking over my shoulder to know
what part of an image I was viewing?

Christian
Received on Tue Mar 12 15:39:06 2002