Discussion: VNC vs X over slow links
Cary B. O'Brien
1998-05-21 17:32:58 UTC
Hi,

I would like to know about people's experiences running VNC over slow
links, esp in comparison to running X over the same link.

We have a customer that is trying to run an X app over a satellite[1]
link. Plenty of bandwidth, but bad latency. The app takes *forever*
to come up. Tens of minutes. Which makes it kind of unusable. All
the little X transactions multiplied by the satellite delay is what
I think is causing the problem.

It seems to me that the VNC protocol might be better in such a situation,
since it would group up the window paints into larger 'chunks'.

Has anyone had any experience in this kind of situation? Any results,
good or bad?

Thanks in advance,
Cary O'Brien
***@access.digex.net



[1] I guess you have to explicitly say geosynchronous now, with all the LEO
and MEO stuff around.
Edward Avis
1998-05-21 17:42:42 UTC
Post by Cary B. O'Brien
We have a customer that is trying to run an X app over a satellite[1]
link. Plenty of bandwidth, but bad latency. The app takes *forever*
to come up. Tens of minutes. Which makes it kind of unusable. All
the little X transactions multiplied by the satellite delay is what
I think is causing the problem.
There is a possible solution not using VNC, which is to run Xnest, the
nested X server, at the same end as the application, and get it to
display on the X server at the user's end. That _might_ reduce the
latency problems. This is actually a very similar approach to running
Xvnc at the application end and displaying it on vncviewer.
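
Something along these lines, I think (untested and from memory, so check
the Xnest man page for the exact options; 'users-machine' and 'some-x-app'
are placeholders):

# on the machine where the application runs
Xnest :1 -display users-machine:0 &
DISPLAY=:1 some-x-app &

The application then makes all its little round trips to the local Xnest;
how much of that Xnest in turn sends across the slow link is another
question.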

Why is this better than using VNC? Well, it isn't. But it would be
best to try as many solutions as possible.

(BTW, I would guess that yes, VNC would help enormously.)

--
Ed Avis <http://members.tripod.com/~mave>
Quentin Stafford-Fraser
1998-05-21 18:03:15 UTC
Cary,

I think you'll find that VNC starts up quite a bit faster than X on
high-latency links, because it involves far fewer round trips than X at
startup. But on slow links X is sometimes faster once it is running. This
has been our experience over ISDN.

If you want to do some tests, make sure you read the section in the FAQ
about speeding up VNC; the size and depth of the desktop will make a big
difference, as will the amount of detail displayed at the time of
connection. Window managers and desktop backgrounds which use dithered
areas extensively can be particularly slow. Lastly, if the user doesn't
mind, I recommend a click-to-focus policy if the window manager allows it.
Otherwise you can get substantial window decorations being redrawn simply by
accidentally moving the mouse over a window.
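
For example, starting the server with a smaller, shallower desktop can make
a surprising difference on a slow link - something like (see the
documentation for the exact options):

Xvnc :1 -geometry 800x600 -depth 8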

regards,
Quentin
------
Dr Quentin Stafford-Fraser
ORL - The Olivetti & Oracle Research Lab
http://www.orl.co.uk/~qsf

-----Original Message-----
From: Cary B. O'Brien <***@access.digex.net>
To: vnc-***@orl.co.uk <vnc-***@orl.co.uk>
Date: 21 May 1998 17:27
Subject: VNC vs X over slow links
Post by Cary B. O'Brien
Hi,
I would like to know about people's experiences running VNC over slow
links, esp in comparison to running X over the same link.
We have a customer that is trying to run an X app over a satellite[1]
link. Plenty of bandwidth, but bad latency. The app takes *forever*
to come up. Tens of minutes. Which makes it kind of unusable. All
the little X transactions multiplied by the satellite delay is what
I think is causing the problem.
It seems to me that the VNC protocol might be better in such a situation,
since it would group up the window paints into larger 'chunks'.
Has anyone had any experience in this kind of situation? Any results,
good or bad?
Thanks in advance,
Cary O'Brien
[1] I guess you have to explicitly say geosynchronous now, with all the LEO
and MEO stuff around.
Steve Cheng
1998-05-21 21:02:23 UTC
Post by Cary B. O'Brien
Hi,
I would like to know about people's experiences running VNC over slow
links, esp in comparison to running X over the same link.
I have used VNC over a 115200bps PPP link. I have never tried X; the old
machine can't take it. The environment is KDE with a not-too-complex
wallpaper. It's slow, but I could still do something useful with it. X would
probably not be significantly faster (and would be a lot more complex).

Sorry if this information isn't useful, but you can take it as a compliment
for VNC :)

// Steve
e-mail: ***@ggi-project.org
www: <http://shell.ipoline.com/~elmert/>
Oliver Hillmann
1998-05-22 01:38:01 UTC
Post by Cary B. O'Brien
I would like to know about people's experiences running VNC over slow
links, esp in comparison to running X over the same link.
We have a customer that is trying to run an X app over a satellite[1]
link. Plenty of bandwidth, but bad latency. The app takes *forever*
to come up. Tens of minutes. Which makes it kind of unusable. All
the little X transactions multiplied by the satellite delay is what
I think is causing the problem.
Hi,

Having shell access to an SGI at university, I played around a bit with
X and VNC through a 33.6k modem line. I would start an X application on
the remote computer, displaying its windows and output on the X server
of my Linux box at home. All X (and VNC) data has to go through this
rather slow modem line.

One day I tried to start netscape navigator for SGI on the remote box,
just to have a look at it. Using generic X, I had to wait several minutes
to see *anything* happen on my X screen, and considerably more time
to see the full window with all buttons etc.

Trying the same using VNC (Xvnc on the SGI, vncviewer on Linux), it also
took quite some time before I could see all the details, but VNC
seems to be faster in two ways:

1. Though it might take some time for the window to be completed, you get
to see how it is built up, and often you can use an application before you
can see every detail. When using netscape, for instance, I can type a URL
to go to while other parts of the window are still being displayed...
This is not very comfortable, but as far as I remember, it is less
horrible than X...

2. When using different applications on your X desktop, it is common
practice to have different windows placed on top of others. Though I don't
know much about the internals of the X protocol, it seems as if redrawing
windows which come back to the top again needs data transfer between the X
server and the client application. This would affect not only the speed of
creating new windows, but also the speed of redrawing existing ones.

Having such a window inside a vncviewer window, which is again among
other windows on the desktop, changes the situation. Covering and
uncovering the vncviewer window now causes the vncviewer application to
redraw it, including the window of the application using VNC. But
instead of communicating with the remote X server, vncviewer can
reconstruct the application window from data which has been transmitted
before. Again: only the local vncviewer window has to be rebuilt, from
known (because transmitted earlier) data; there need be no traffic over
the slow line.

Well, I hope that you got the point I wanted to make... (And I hope
that I did not talk rubbish... :) It's all from my own experience and
point of view, so just try playing around with VNC; it's quite easy to
install and handle, I think...

Good luck,

Oliver
Dr. Joel M. Hoffman
1998-05-22 01:46:25 UTC
Post by Oliver Hillmann
1. Though it might take some time for the window to be completed, you get
to see how it is built up, and often you can use an application before you
can see every detail. When using netscape, for instance, I can type a URL
to go to while other parts of the window are still being displayed...
This is not very comfortable, but as far as I remember, it is less
horrible than X...
Hmm. Gives me an idea. What about sending out every other line (or
even every third line) as a solution to low-bandwidth operations?
Something like interlaced GIFs?


-Joel
(***@exc.com)
Edward Avis
1998-05-22 10:11:23 UTC
What about sending out every other line (or even every third line) as a
solution to low-bandwidth operations? Something like interlaced GIFs?
Problem is, that would reduce the available compression, so time taken
overall would be more. In general, interlaced GIFs are bigger than
their non-interlaced equivalents.

There are all sorts of fun tricks you could also do - for example, send
the luminance first (to build up a grey-scale image), and then the
colour data. Or you could first send an image at quarter-resolution,
then half-resolution, then finally full res. The trick is finding one
that isn't too complex (your suggestion is good by this measure), and
balancing immediate feedback against reducing the total amount of data
transmitted, bearing in mind that "plain" image data is easiest to
compress.

--
Ed Avis <http://members.tripod.com/~mave>
Alan Cox
1998-05-22 13:07:23 UTC
Post by Edward Avis
There are all sorts of fun tricks you could also do - for example, send
the luminance first (to build up a grey-scale image), and then the
colour data. Or you could first send an image at quarter-resolution,
The video world tends to send YUV422 first (4 bits of luminance, 2 bits of
U/V - i.e. colour data). It works pretty well for an 'overview' image.

Alan
Dr. Joel M. Hoffman
1998-05-22 13:07:26 UTC
Post by Edward Avis
What about sending out every other line (or even every third line) as a
solution to low-bandwidth operations? Something like interlaced GIFs?
Problem is, that would reduce the available compression, so time taken
overall would be more. In general, interlaced GIFs are bigger than
their non-interlaced equivalents.
Why would it reduce the compression? Large solid areas would still
be large solid areas, and areas with detail would be easier to
transmit. Most applications will work just fine even with only every
other line, and so often the second pass through the image will not be
needed at all.

-Joel
(***@exc.com)
Edward Avis
1998-05-22 13:27:25 UTC
[about only transmitting every other line of pixels]
Post by Dr. Joel M. Hoffman
Why would it reduce the compression? Large solid areas would still
be large solid areas, and areas with detail would be easier to
transmit. Most applications will work just fine even with only every
other line, and so often the second pass through the image will not be
needed at all.
If you do have a second pass, I'd expect that the compression would be
worse than one-pass transmission. Large solid areas would become two
slightly less large solid areas, for the first and second pass.

We could debate this on a theoretical basis all day, but I suggest you
get a screenshot of some application, make it into a non-interlaced GIF
file and an interlaced GIF file, and see which file is smaller. (Of
course, GIF uses a different encoding to VNC, but I'd expect similar
results with VNC.)

If it turns out the interlaced one is in fact smaller - well, I'll be
gutted, won't I? :-)

Your point that the second pass might not be needed is a good one, and
if most of the time it were not needed, the total data transmitted would
indeed be less.

--
Ed Avis <http://members.tripod.com/~mave>
James [Wez] Weatherall
1998-05-22 13:27:31 UTC
Post by Dr. Joel M. Hoffman
Post by Edward Avis
What about sending out every other line (or even every third line) as a
solution to low-bandwidth operations? Something like interlaced GIFs?
Problem is, that would reduce the available compression, so time taken
overall would be more. In general, interlaced GIFs are bigger than
their non-interlaced equivalents.
Why would it reduce the compression? Large solid areas would still
be large solid areas, and areas with detail would be easier to
transmit. Most applications will work just fine even with only every
other line, and so often the second pass through the image will not be
needed at all.
If I understand you correctly, then what you're saying is _not_ that the
update be split into two sets of alternating sub-rectangles, but that the
encoder effectively splits the update area into two virtual rectangles
composed of the alternate sets of lines, then encodes each individually
and sends them. The receiving end sees normal-looking update data but has
to put the data it would normally write to subsequent lines on every other
line.
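
As a rough sketch (made-up names, untested - just to illustrate the
splitting at the encoder end):

#include <string.h>

/* Copy every second scanline of 'src' (w pixels wide, h lines,
 * bytesPerPixel bytes per pixel) into 'dst', starting at scanline
 * 'phase' (0 or 1).  Returns the number of lines copied. */
static int ExtractAlternateLines(const unsigned char *src, unsigned char *dst,
                                 int w, int h, int bytesPerPixel, int phase)
{
    int line, n = 0;
    int stride = w * bytesPerPixel;

    for (line = phase; line < h; line += 2) {
        memcpy(dst + n * stride, src + line * stride, stride);
        n++;
    }
    return n;
}

The phase-0 lines would be encoded and sent as one virtual rectangle, the
phase-1 lines as another; the viewer then writes each decoded line to
every other line of the real rectangle.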

How you encode the data is independent, to some extent, of whether or not
you're sending the whole region or just alternate lines, if you take this
approach.

In terms of compression, this scheme will probably be slightly worse than
what we currently do but might give more favourable responsiveness if well
implemented, I think.

Cheers,

James "Wez" Weatherall
--
Olivetti & Oracle Research Laboratory, Cambridge, UK.
Tel : Work - 343000
Charles Hines
1998-05-22 14:48:52 UTC
Since the bandwidth discussion has come up again, what happened with
the zlib compression experiments that Harco de Hilster did back in
March? By making some simple mods to the WriteExact & ReadExact he
claimed to have seen nice compression ratios:

Harco> As a test I ran x11perf, and compared the results with raw,
Harco> rre, corre and hextile encodings.

Harco> x11perf -repeat 1 -time 1 -range rect1,bigsrect500

Harco> The achieved compression ratios and actual numbers of bytes sent:

Harco> encoding        ratio   bytes sent
Harco> hextile      7.987284       268725
Harco> corre        9.261887       461645
Harco> rre         11.370915       538029
Harco> raw        112.878076       283053

Seems to me that adding this kind of compression makes sense, and it
should complement any other optimizations added later. And since
the zlib code is part of the source distribution now, you don't even
have to track it down to link it in.
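
The simple version is presumably something of this shape (a sketch only,
using zlib's documented API; the real WriteExact lives in the VNC sources,
and stream setup, error handling and the negotiation Harco mentions are
all left out):

#include <zlib.h>

extern int WriteExact(int sock, char *buf, int len); /* VNC's write-all loop */

static z_stream zs; /* assume deflateInit(&zs, Z_DEFAULT_COMPRESSION) ran once */

int CompressedWriteExact(int sock, char *buf, int len)
{
    char out[8192];

    zs.next_in = (Bytef *)buf;
    zs.avail_in = len;
    do {
        zs.next_out = (Bytef *)out;
        zs.avail_out = sizeof(out);
        /* Z_SYNC_FLUSH so the viewer can decode everything sent so far */
        deflate(&zs, Z_SYNC_FLUSH);
        if (WriteExact(sock, out, sizeof(out) - zs.avail_out) <= 0)
            return -1;
    } while (zs.avail_in > 0 || zs.avail_out == 0);
    return 1;
}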

Harco> The actual implementation will require an additional message
Harco> (compression yes/no, compression level) I think, and perhaps
Harco> some minor code rewriting to optimize the compression.

I spoke with him offline about this, and he said that he was working
on the more correct implementation, although I think that adding the
simple implementation as an undocumented compile-time option wouldn't
be a bad idea.

Harco - how's it coming along? I know that working on code in one's
spare time can be slow going at times.

Chuck

--
*******************************************************************************
Charles K. Hines <***@vnet.ibm.com>
IBM Logic Synthesis Developer [BooleDozer (TM)]
Martial Arts Instructor [Modern Arnis and Balintawak Escrima]

"Go back to sleep, Chuck. You're just havin' a nightmare
-- of course, we ARE still in Hell." (Gary Larson)
*******************************************************************************
Harco de Hilster
1998-05-26 09:12:22 UTC
Post by Charles Hines
Since the bandwidth discussion has come up again, what happened with
the zlib compression experiments that Harco de Hilster did back in
March? By making some simple mods to the WriteExact & ReadExact he
Hi Charles,

Sorry I didn't get back to you. It is still on my 'almost finished'
list, but other work is more urgent now. I can send the code I have
so far if you are willing to finish it up.

I got the impression that the ORL people were not too keen on adding
regular compression and/or adding to the protocol, though.

P.S. I am no longer on the list because of the huge number of messages.

Regards,

Harco


--
-----------------+----------------------+------------------------------
Harco de Hilster CAOS/CAMM Center Phone: +31(0)24-3653379
Research & University of Nijmegen Fax: +31(0)24-3652977
Development Toernooiveld 1 E-mail: ***@caos.kun.nl
System Management 6525 ED Nijmegen URL: http://www.caos.kun.nl
The Netherlands
-----------------+----------------------+------------------------------
Charles Hines
1998-05-26 14:30:59 UTC
Post by Charles Hines
Since the bandwidth discussion has come up again, what happened with
the zlib compression experiments that Harco de Hilster did back in
March? By making some simple mods to the WriteExact & ReadExact he
Harco> Hi Charles,

Hi Harco.

Harco> Sorry I didn't get back to you.

No problem at all - I just didn't want your earlier efforts on this
compression work to be forgotten...

Harco> It is still on my 'almost finished' list, but other work is
Harco> more urgent now.

I completely understand this - I recently had to give up
maintainership of the fvwm window manager because I didn't have the
time to properly devote to it.

Harco> I can send the code I have so far if you are willing to finish
Harco> it up.

I appreciate that, but as you can probably tell from my statement
above, I most likely wouldn't have the time to be able to finish it up
either. But perhaps someone else on the vnc list (or at ORL) would be
willing to take up the torch?

Harco> I got the impression that the ORL people were not too keen on adding
Harco> regular compression and/or adding to the protocol, though.
Dr. Joel M. Hoffman
1998-05-22 13:33:02 UTC
Post by Edward Avis
If you do have a second pass, I'd expect that the compression would be
worse than one-pass transmission. Large solid areas would become two
[...]
Your point that the second pass might not be needed is a good one, and
if most of the time it were not needed, the total data transmitted would
indeed be less.
I just ran a test. I captured the screen, and erased every other row.
The screen was still understandable. I think working with that kind
of a screen at double speed would be better than working with a full
screen at half speed.

-Joel
(***@exc.com)
Edward Avis
1998-05-22 13:39:53 UTC
Post by Dr. Joel M. Hoffman
I just ran a test. I captured the screen, and erased every other row.
The screen was still understandable. I think working with that kind
of a screen at double speed would be better than working with a full
screen at half speed.
I think it would depend on what you were running - but for most
applications over slow links, erasing every other row would be a good
scheme. Effectively you are halving the vertical screen resolution, to
640x240 or whatever.

--
Ed Avis <http://members.tripod.com/~mave>
Dr. Joel M. Hoffman
1998-05-22 13:39:58 UTC
Post by Edward Avis
Post by Dr. Joel M. Hoffman
I just ran a test. I captured the screen, and erased every other row.
The screen was still understandable. I think working with that kind
of a screen at double speed would be better than working with a full
screen at half speed.
I think it would depend on what you were running - but for most
applications over slow links, erasing every other row would be a good
scheme. Effectively you are halving the vertical screen resolution, to
640x240 or whatever.
And, for those applications where you need every line (graphics?
small text?) you could either (a) wait for the second pass; or (b)
turn off this encoding.

-Joel
(***@exc.com)
Dr. Joel M. Hoffman
1998-05-22 13:43:03 UTC
Post by James [Wez] Weatherall
If I understand you correctly, then what you're saying is _not_ that the
update be split into two sets of alternating sub-rectangles, but that the
encoder effectively splits the update area into two virtual rectangles
composed of the alternate sets of lines, then encodes each individually
and sends them. The receiving end sees normal-looking update data but has
to put the data it would normally write to subsequent lines on every other
line.
Right.
Post by James [Wez] Weatherall
How you encode the data is independent, to some extent, of whether or not
you're sending the whole region or just alternate lines, if you take this
approach.
Right.
Post by James [Wez] Weatherall
In terms of compression, this scheme will probably be slightly worse than
what we currently do but might give more favourable responsiveness if well
implemented, I think.
That's the idea. I think for complex graphics areas, it will give
much better responsiveness.

-Joel
(***@exc.com)
Christian Brunschen
1998-05-22 14:15:22 UTC
For a well-compressed way of sending rectangles, why don't you take a look
at the PNG (Portable Network Graphics) spec, which is a W3C (World-Wide
Web Consortium) recommendation? It supports interlacing in a much better
way than GIF, is not encumbered by the Unisys LZW patent, and it can filter
the pixels before compressing them ... there are free implementations of
the code you need to work with PNGs (libpng and zlib), too; and the
encoding is, of course, lossless - yet it supports both colormapped data
and truecolor data with up to 16 bits per color component. Plus it includes
alpha (transparency). So if you had a _very_ intelligent server, you could
transmit a PNG covering the entire rectangle within which the change
occurred, but only those pixels that had actually changed would be set -
the rest could be set to one and the same fully transparent value, which
should basically 'disappear' in compression.
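
To make that concrete, a little sketch (invented names; it assumes 32-bit
pixels with the alpha in the top byte, which is only one possible layout):

#include <stdint.h>

/* Build an RGBA buffer for the w x h bounding rectangle of a change at
 * (x, y): changed pixels are copied and made opaque, while unchanged
 * pixels all get the same fully transparent value, so long runs of them
 * should compress to almost nothing. */
void BuildAlphaUpdate(const uint32_t *oldFB, const uint32_t *newFB,
                      int fbWidth, int x, int y, int w, int h,
                      uint32_t *out)
{
    int i, j;

    for (j = 0; j < h; j++) {
        for (i = 0; i < w; i++) {
            uint32_t o = oldFB[(y + j) * fbWidth + (x + i)];
            uint32_t n = newFB[(y + j) * fbWidth + (x + i)];
            out[j * w + i] = (o != n) ? (n | 0xFF000000u) : 0x00000000u;
        }
    }
}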

Best regards // Christian Brunschen
Nelson Minar
1998-05-22 17:19:45 UTC
So if you had a _very_ intelligent server, you could transmit a PNG
covering the entire rectangle within which the change occurred, but
only those pixels that had actually changed would be set - the rest
could be set to one and the same fully transparent value, which
should basically 'disappear' in compression.
That's a very elegant idea. Would it work?
Christian Brunschen
1998-05-23 19:28:41 UTC
Post by Nelson Minar
So if you had a _very_ intelligent server, you could transmit a PNG
covering the entire rectangle within which the change occurred, but
only those pixels that had actually changed would be set - the rest
could be set to one and the same fully transparent value, which
should basically 'disappear' in compression.
That's a very elegant idea. Would it work?
Of course it would :)

Actually, I wasn't thinking of putting together a fully conformant PNG file
representing the rectangle in question; that would add a lot of overhead,
transmitting information upon which the client and server have already agreed.

No, I was mainly thinking about being able to send alpha information of
some sort, which would make the above possible; plus, using the PNG pixel
data filtering and compression routines to improve transmission time (i.e.,
transmit highly compressed image data).

Best regards

// Christian Brunschen
Quentin Stafford-Fraser
1998-05-25 10:39:41 UTC
Post by Nelson Minar
So if you had a _very_ intelligent server, you could transmit a PNG
covering the entire rectangle within which the change occurred, but
only those pixels that had actually changed would be set - the rest
could be set to one and the same fully transparent value, which
should basically 'disappear' in compression.
That's a very elegant idea. Would it work?
It is a nice idea, but remember that the alpha information takes some space.
Remember that most of VNC works on the efficient encoding of solid-colour
rectangles. If you start specifying which pixels of this white rectangle
were not white before, you won't gain anything. But yes, you could
potentially win with raw pixels, assuming that PNG does its job efficiently.

One thing we would need to know before incorporating any general compression
like zlib is that it was easily portable to a wide variety of platforms - I
seem to recall that somebody looked at zlib and thought the code was a bit
of a nightmare, and that a port to Java, for example, would not be fun.
It would also need to be GPLed.

As mentioned in the FAQ, one way to get compression is to use SSH, but this
is not free on the Windows platform, and I don't know if it exists for Java?

Quentin
------
Dr Quentin Stafford-Fraser
ORL - The Olivetti & Oracle Research Lab
http://www.orl.co.uk/~qsf
Christian Brunschen
1998-05-25 11:54:36 UTC
Post by Quentin Stafford-Fraser
Post by Nelson Minar
So if you had a _very_ intelligent server, you could transmit a PNG
covering the entire rectangle within which the change occurred, but
only those pixels that had actually changed would be set - the rest
could be set to one and the same fully transparent value, which
should basically 'disappear' in compression.
That's a very elegant idea. Would it work?
It is a nice idea, but remember that the alpha information takes some space.
Yes, but if you have an update that lies along a long diagonal line for
instance, then there will be _lots_ of unchanged space which could be sent
as fully-transparent pixels - and those would compress away very well.
Post by Quentin Stafford-Fraser
Remember that most of VNC works on the efficient encoding of solid-colour
rectangles. If you start specifying which pixels of this white rectangle
were not white before, you won't gain anything. But yes, you could
potentially win with raw pixels, assuming that PNG does its job efficiently.
I actually think that a PNG-encoded rectangle may well win even over
Hextile or CoRRE; but as usual, only experiments will show.
Post by Quentin Stafford-Fraser
One thing we would need to know before incorporating any general compression
like zlib is that it was easily portable to a wide variety of platforms - I
seem to recall that somebody looked at zlib and thought the code was a bit
of a nightmare, and that a port to Java, for example, would not be fun.
It would also need to be GPLed.
As mentioned in the FAQ, one way to get compression is to use SSH, but this
is not free on the Windows platform, and I don't know if it exists for Java?
My 'vision' for a PNG rect encoding is much like the following....

The PNG format is built of 'chunks'; some are mandatory, some are
optional. A conforming PNG image specifies a number of standard chunks
that describe the image data, and one or more chunks that actually contain
the image data.
Now, the information about the image data format is already known to both
server and client, so we really don't want to retransmit that; likewise,
PNG chunks contain length and checksum information. The length is
obviously useful, but the checksum we might do without (since we are, by
definition, running over a reliable byte-stream transport).

So, a server might encode a rectangle like this:

The server uses a pre-existing PNG writing API (such as that offered by
libpng), and creates the PNG file in question. It then extracts from the
file only that data which is not known on both sides already - i.e. the
compressed, filtered pixel data - and sends that.

The client, upon reception, can reconstruct the necessary PNG headers,
build a fake PNG file, and hand that to a pre-existing PNG decoder (such
as that offered by libpng, again :), getting back a bunch of pixels,
which it composites onto the target display.

Of course, the actual generation/reconstruction of the filtered, compressed
pixel data is not very difficult - so rolling your own there, rather than
using a PNG writing/reading library, is rather easy. So, if there is
'just' an inflate/deflate library available, sending the pixels in the
'PNG way' would be very simple.
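
As a sketch of that (using zlib's one-shot compress() for brevity; a real
implementation would keep a deflate stream open across updates, and real
PNG's per-line filter-type byte is omitted here):

#include <zlib.h>

/* PNG's 'Sub' filter: replace each byte by its difference from the byte
 * bpp bytes to its left, working right to left so that only unfiltered
 * values are used. */
static void SubFilterLine(unsigned char *line, int lineBytes, int bpp)
{
    int i;

    for (i = lineBytes - 1; i >= bpp; i--)
        line[i] -= line[i - bpp];
}

/* Filter h scanlines in place, then deflate them into 'out' (outLen is
 * the capacity of 'out').  Returns the compressed length, or -1. */
long FilterAndDeflate(unsigned char *pixels, int lineBytes, int h,
                      int bpp, unsigned char *out, uLongf outLen)
{
    int j;

    for (j = 0; j < h; j++)
        SubFilterLine(pixels + j * lineBytes, lineBytes, bpp);
    if (compress(out, &outLen, pixels, (uLong)h * lineBytes) != Z_OK)
        return -1;
    return (long)outLen;
}

The client just inflates and un-filters in the reverse order.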


Of course, I urge everyone to take a look at the actual PNG spec:

http://www.w3.org/TR/REC-png
Post by Quentin Stafford-Fraser
Quentin
------
Dr Quentin Stafford-Fraser
ORL - The Olivetti & Oracle Research Lab
http://www.orl.co.uk/~qsf
// Christian Brunschen
Nelson Minar
1998-05-25 14:35:13 UTC
Just to be clear, there are two reasons that PNG might be interesting:

PNG compression is very good at the kinds of things VNC is for. It
might be a substantial win.

It's a standard format with reasonably good tools. There's a lot of
flexibility in PNG chunking, as well as different encoding options.
VNC could even develop extensions that other PNG programs could easily
ignore.

It's an interesting idea, but I guess unless someone spends the time
to implement it nothing will happen.
Christian Brunschen
1998-05-25 12:14:40 UTC
Post by Quentin Stafford-Fraser
Post by Nelson Minar
So if you had a _very_ intelligent server, you could transmit a PNG
covering the entire rectangle within which the change occurred, but
only those pixels that had actually changed would be set - the rest
could be set to one and the same fully transparent value, which
should basically 'disappear' in compression.
That's a very elegant idea. Would it work?
It is a nice idea, but remember that the alpha information takes some space.
Remember that most of VNC works on the efficient encoding of solid-colour
rectangles. If you start specifying which pixels of this white rectangle
were not white before, you won't gain anything. But yes, you could
potentially win with raw pixels, assuming that PNG does its job efficiently.
One thing we would need to know before incorporating any general compression
like zlib is that it was easily portable to a wide variety of platforms - I
seem to recall that somebody looked at zlib and thought the code was a bit
of a nightmare, and that a port to Java, for example, would not be fun.
Well, that point is actually moot, since the classes

java.util.zip.Deflater
and
java.util.zip.Inflater

handle both compression and decompression of the data in question just
fine; and they are standard in JDK 1.1 from JavaSoft.
Post by Quentin Stafford-Fraser
It would also need to be GPLed.
As long as the librar(y/ies) in question are available freely and with
source code, must it be the GPL in particular?
Post by Quentin Stafford-Fraser
As mentioned in the FAQ, one way to get compression is to use SSH, but this
is not free on the Windows platform, and I don't know if it exists for Java?
Quentin
------
Dr Quentin Stafford-Fraser
ORL - The Olivetti & Oracle Research Lab
http://www.orl.co.uk/~qsf
// Christian Brunschen
Christian Brunschen
1998-05-25 12:42:31 UTC
Post by Christian Brunschen
Post by Quentin Stafford-Fraser
It would also need to be GPLed.
As long as the librar(y/ies) in question are available freely and with
source code, must it be the GPL in particular?
To be precise, the freely available libraries for deflate compression
(zlib) and PNG (libpng) have the following licences, respectively:

--- zlib ---

/* zlib.h -- interface of the 'zlib' general purpose compression library
version 1.0.4, Jul 24th, 1996.

Copyright (C) 1995-1996 Jean-loup Gailly and Mark Adler

This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.

Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:

1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.

Jean-loup Gailly Mark Adler
***@prep.ai.mit.edu ***@alumni.caltech.edu

--- /zlib ---
(The zlib home page is at <http://www.cdrom.com/pub/infozip/zlib/> )

--- png ---

* COPYRIGHT NOTICE:
*
* The PNG Reference Library is supplied "AS IS". The Contributing Authors
* and Group 42, Inc. disclaim all warranties, expressed or implied,
* including, without limitation, the warranties of merchantability and of
* fitness for any purpose. The Contributing Authors and Group 42, Inc.
* assume no liability for direct, indirect, incidental, special, exemplary,
* or consequential damages, which may result from the use of the PNG
* Reference Library, even if advised of the possibility of such damage.
*
* Permission is hereby granted to use, copy, modify, and distribute this
* source code, or portions hereof, for any purpose, without fee, subject
* to the following restrictions:
* 1. The origin of this source code must not be misrepresented.
* 2. Altered versions must be plainly marked as such and must not be
* misrepresented as being the original source.
* 3. This Copyright notice may not be removed or altered from any source or
* altered source distribution.
*
* The Contributing Authors and Group 42, Inc. specifically permit, without
* fee, and encourage the use of this source code as a component to
* supporting the PNG file format in commercial products. If you use this
* source code in a product, acknowledgment is not required but would be
* appreciated.

--- /png---
(the PNG home page is at <http://www.cdrom.com/pub/png/> )
Post by Christian Brunschen
Post by Quentin Stafford-Fraser
As mentioned in the FAQ, one way to get compression is to use SSH, but this
is not free on the Windows platform, and I don't know if it exists for Java?
Quentin
------
Dr Quentin Stafford-Fraser
ORL - The Olivetti & Oracle Research Lab
http://www.orl.co.uk/~qsf
// Christian Brunschen
// Christian Brunschen, again
Edward Avis
1998-05-26 10:15:58 UTC
Post by Quentin Stafford-Fraser
One thing we would need to know before incorporating any general
compression like zlib is that it was easily portable to a wide variety
of platforms - I seem to recall that somebody looked at zlib and
thought the code was a bit of a nightmare, and that a port to Java, for
example, would not be fun. It would also need to be GPLed.
IIRC:

A while ago somebody suggested zlib; then I suggested an alternative,
LZO (<http://wildsau.idv.uni-linz.ac.at/mfx/lzo.html>). The problem is
that LZO was described as "assembler dressed up to look like C", and
would not be easily portable to Java. Zlib on the other hand is
included as a native method in Java 1.1, so it would be very fast (for
Java).
Post by Quentin Stafford-Fraser
As mentioned in the FAQ, one way to get compression is to use SSH, but
this is not free on the Windows platform, and I don't know if it exists
for Java?
Sergey Okhapin has ported ssh to Cygwin32, see
<http://wildsau.idv.uni-linz.ac.at/mfx/lzo.html>. I don't think there
is a Java version however.

--
Ed Avis <http://members.tripod.com/~mave>
Tristan Richardson
1998-06-02 18:09:59 UTC
Post by Harco de Hilster
I got the impression that the ORL people were not too keen on adding
regular compression and/or adding to the protocol, though.
On the contrary, this is something we'd definitely look at if we had time. It
would mean a new version of the protocol, but it could even be done without
sacrificing backwards-compatibility with older viewers and servers.