gzip (GNU zip) is a compression utility designed to be a replacement for 'compress'. Its main advantages over compress are much better compression and freedom from patented algorithms. The GNU Project uses it as the standard compression program for its system.
By default, gzip currently uses the LZ77 algorithm used in zip 1.9 (the portable pkzip-compatible archiver). The gzip format was, however, designed to accommodate several compression algorithms. See below for a comparison of zip and gzip.
gunzip can currently decompress files created by gzip, compress or pack. The detection of the input format is automatic. For the gzip format, gunzip checks a 32 bit CRC. For pack, gunzip checks the uncompressed length. The 'compress' format was not designed to allow consistency checks. However, gunzip is sometimes able to detect a bad .Z file because there is some redundancy in the .Z compression format. If you get an error when uncompressing a .Z file, do not assume that the .Z file is correct simply because the standard uncompress does not complain; in general this only means that the standard uncompress does not check its input and happily generates garbage output.
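As an aside, the formats gunzip understands can be told apart by their first two (magic) bytes. The following minimal C sketch illustrates the idea; it is only an illustration, not gunzip's actual detection code:

    #include <stdio.h>

    /* Illustrative only: report which of the formats gunzip understands a
       file appears to be, based on its first two magic bytes. */
    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (f == NULL) { perror(argv[1]); return 1; }
        int b0 = getc(f), b1 = getc(f);
        fclose(f);
        if (b0 == 0x1f && b1 == 0x8b)
            printf("gzip format (.gz)\n");
        else if (b0 == 0x1f && b1 == 0x9d)
            printf("compress format (.Z)\n");
        else if (b0 == 0x1f && b1 == 0x1e)
            printf("pack format (.z)\n");
        else
            printf("unknown format\n");
        return 0;
    }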
gzip produces files with a .gz extension. Previous versions of gzip used the .z extension, which was already used by the 'pack' Huffman encoder. gunzip is able to decompress .z files (packed or gzip'ed).
Several planned features are not yet supported (see the file TODO). See the file NEWS for a summary of changes since 0.5. See the file INSTALL for installation instructions. Some answers to frequently asked questions are given in the file INSTALL; please read it. (In particular, please don't ask me once more for an /etc/magic entry.)
WARNING: on several systems, compiler bugs cause gzip to fail, in particular when optimization options are on. See the section "Special targets" at the end of the INSTALL file for a list of known problems. For all machines, use "make check" to check that gzip was compiled correctly. Try compiling gzip without any optimization if you have a problem.
Please send all comments and bug reports by electronic mail to: Jean-loup Gailly <jloup@chorus.fr>
or, if this fails, to bug-gnu-utils@prep.ai.mit.edu. Bug reports should ideally include:
If you send me patches for machines I don't have access to, please test them very carefully. gzip is used for backups, so it must be extremely reliable.
The package crypt++.el is highly recommended for manipulating gzip'ed files from Emacs. It automatically recognizes encrypted and compressed files when they are first visited or written. It is available via anonymous ftp to roebling.poly.edu [128.238.5.31] in /pub/crypt++.el. The same directory also contains patches to dired, ange-ftp and info. GNU tar 1.11.2 has a -z option to invoke gzip directly, so you don't have to patch it. The package ftp.uu.net:/languages/emacs-lisp/misc/jka-compr19.el.Z also supports gzip'ed files.
The znew and gzexe shell scripts provided with gzip benefit from (but do not require) the cpmod utility to transfer file attributes. It is available by anonymous ftp on gatekeeper.dec.com in /.0/usenet/comp.sources.unix/volume11/cpmod.Z.
The sample programs zread.c, sub.c and add.c in subdirectory sample are provided as examples of useful complements to gzip. Read the comments inside each source file. The perl script ztouch is also provided as an example (it is not installed by default since it relies on perl).
gzip is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License, a copy of which is provided under the name COPYING. The latest version of gzip is always available by ftp in prep.ai.mit.edu:/pub/gnu, or in any of the prep mirror sites:
A VMS executable is available in ftp.spc.edu:[.macro32.savesets]gzip-1-*.zip (use [.macro32]unzip.exe to extract). A PRIMOS executable is available in ftp.lysator.liu.se:/pub/primos/run/gzip.run. OS/2 executables (16 and 32 bits versions) are available in ftp.tu-muenchen.de:/pub/comp/os/os2/archiver/gz*-[16,32].zip
Some ftp servers can automatically make a tar.Z from a tar file. If you are getting gzip for the first time, you can ask for a tar.Z file instead of the much larger tar file.
Many thanks to those who provided me with bug reports and feedback. See the files THANKS and ChangeLog for more details.
Note about zip vs. gzip:
The name 'gzip' was a very unfortunate choice, because zip and gzip are two really different programs, although the actual compression and decompression sources were written by the same persons. A different name should have been used for gzip, but it is too late to change now.
zip is an archiver: it compresses several files into a single archive file. gzip is a simple compressor: each file is compressed separately. Both share the same compression and decompression code for the 'deflate' method. unzip can also decompress old zip archives (implode, shrink and reduce methods). gunzip can also decompress files created by compress and pack. zip 1.9 and gzip do not support compression methods other than deflation. (zip 1.0 supports shrink and implode.) Better compression methods may be added in future versions of gzip. zip will always stick to absolute compatibility with pkzip; it is thus constrained by PKWare, which is a commercial company. The gzip header format is deliberately different from that of pkzip to avoid such a constraint.
On Unix, gzip is mostly useful in combination with tar. GNU tar 1.11.2 has a -z option to invoke gzip automatically. "tar -z" compresses better than zip, since gzip can then take advantage of redundancy between distinct files. The drawback is that you must scan the whole tar.gz file in order to extract a single file near the end; unzip can directly seek to the end of the zip file. There is no overhead when you extract the whole archive anyway. If a member of a .zip archive is damaged, other files can still be recovered. If a .tar.gz file is damaged, files beyond the failure point cannot be recovered. (Future versions of gzip will have error recovery features.)
gzip and gunzip are distributed as a single program. zip and unzip are, for historical reasons, two separate programs, although the authors of these two programs work closely together in the info-zip team. zip and unzip are not associated with the GNU project. The sources are available by ftp in
For general building and installation instructions, see the file INSTALL. If you need to build GNU Make and have no other make program to use, you can use the shell script build.sh instead. To do this, first run configure as described in INSTALL. Then, instead of typing make to build the program, type sh build.sh. This should compile the program in the current directory. Then you will have a Make program that you can use for make install, or whatever else.
It has been reported that the XLC 1.2 compiler on AIX 3.2 is buggy: if you compile make with cc -O on AIX 3.2, the result will not work correctly. Using cc without -O is said to work.
One area that is often a problem in configuration and porting is the code to check the system's current load average. To make it easier to test and debug this code, you can do make check-loadavg to see if it works properly on your system. (You must run configure beforehand, but you need not build Make itself to run this test.)
See the file NEWS for what has changed since previous releases.
GNU Make is fully documented in make.texinfo. See the section entitled "Problems and Bugs" for information on submitting bug reports.
GNU Make is free software. See the file COPYING for copying conditions.
This pax file contains the GNU rcs commands, version 5.7. These commands are included in directory "programs":
The man pages are included in directory "manpages". The ported source code is included in directory "src".
To get started using rcs, just create a directory named RCS in the directory containing the parts you want to put into RCS. Once the RCS directory is created, all the rcs commands will work normally. You can read the rcsintro man page to get an introduction to the rcs commands.
The only known restriction is that these rcs commands cannot handle binary files. This is a deficiency in the OE diff command, not in the rcs commands themselves, but the rcs commands depend on the diff command.
This directory contains the GNU diff, diff3, sdiff, and cmp utilities. Their features are a superset of the Unix features and they are significantly faster. cmp has been moved here from the GNU textutils.
Report bugs to bug-gnu-utils@prep.ai.mit.edu
I finally got tired of all of these wild cron programs that take the task of running timed jobs to ridiculous extremes in terms of capabilities and unnecessary features. So here is my entry: a crond/crontab combination that is simple and elegant, and hopefully secure to boot. This cron implements reasonable features in terms of field specification in the crontab and allows individual user crontabs.
This program is written entirely from scratch by yours truly (sig at bottom).
All jobs are run with /bin/sh for conformity and portability, thereby avoiding the mess that occurs with other crons that try to use the user's preferred shell, which breaks down for special users and even makes some of us normal users unhappy (for example, /bin/csh does not use a true O_APPEND mode and has difficulty redirecting stdout and stderr to different places!). You can, of course, run shell scripts in whatever language you like by making them executable with #!/bin/csh or whatever as the first line. If you don't like the extra processes, just 'exec' them.
Under the same reasoning, this cron does not allow you to specify environment variables or other stuff better left specified as arguments to a shell script. Talk about nonsense!
The programs were written with an eye towards security; hopefully I haven't forgotten anything. The programs were also written with an eye towards nice, clean, algorithmically sound code. It's small, and the only fancy code is that which deals with child processes. I do not try to optimize with vfork() since it causes headaches and is rather pointless considering I'm execing a shell most of the time, and I pay close attention to the descriptors left open in crond and to preventing crond from running away.
This is an ANSI C program; you must use a compiler that understands prototypes, such as GCC. I will not accept bug reports related to hacking the program to work with a non-ANSI compiler.
Note that the source code, especially in regard to changing the effective user, is Linux specific (SysVish). I welcome any changes in regard to making the mechanism work with other platforms.
Permissions should be as outlined below. You will want to create a special 'cron' group in which you put those users that are allowed to use the crontab program.
-rwx------   1 root   wheel   24864 Apr 27 09:02 /usr/bin/crond*
-rwsr-x---   1 root   cron    24311 Apr 27 09:02 /usr/bin/crontab*

crond should be run automatically at system startup from /etc/rc.local (or equivalent). It automatically detaches. A log level of 8 is normally specified, and you normally append using /bin/sh's >>, allowing the log file to be backed up and cleared with an 'echo >/var/log/cron' in your cron scripts.
/usr/bin/crond -l8 >>/var/log/cron 2>&1

The crontab files are normally located in /var/spool/cron/crontabs. The directories normally have permissions:
drwxr-x---   3 root   wheel   1024 Feb 24 18:17 /var/spool/cron/
drwxr-x---   2 root   wheel   1024 May  1 10:28 /var/spool/cron/crontabs
Use the crontab program to create a personal crontab with the following two lines:
* * * * * date >>/tmp/test
* * * * * date
Check the log output of crond to ensure the cron entries are being run once a minute, check /tmp/test to ensure the date is being appended to it once a minute, and check your mail to ensure that crond is mailing you the date from the other entry once a minute.
After you are through testing cron, delete the entries with crontab -e or crontab -d
Send any bug reports and source code changes to me, Matthew Dillon:
dillon@apollo.backplane.com
Note carefully that I will not accept any local ANSI prototypes for system calls that should properly be in an external include file, that I will probably not accept additional features to the program, and that I will not accept any changes to make the source compile under a non-ANSI compiler. I will not accept any radical code changes... the purpose being that I want this cron to be made bug free rather than feature full.
Changes to overridable defaults in defs.h should be made in the Makefile; you may submit a Makefile for your platform. Changes to the #include's in defs.h should be made by a combination of -D options in the Makefile and #ifdef's for those options in defs.h, rather than relying on pre-definitions made by the C compiler.
Changes to source code to accommodate one platform or another should be made in the same manner.
Matthew Dillon - dillon@apollo.backplane.com [always include a portion of the original email in any response!]
GNU dbm is a set of database routines that use extendible hashing and work similarly to the standard UNIX dbm routines.
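As a rough illustration of that dbm-style interface, here is a minimal C sketch (not part of the distribution) that stores and fetches one record; the file name is arbitrary and error handling is kept to a minimum:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <gdbm.h>

    int main(void)
    {
        /* Open (creating if necessary) a gdbm database file. */
        GDBM_FILE db = gdbm_open("example.gdbm", 0, GDBM_WRCREAT, 0644, NULL);
        if (db == NULL)
            return 1;

        datum key, value;
        key.dptr = "greeting";        key.dsize = strlen("greeting");
        value.dptr = "hello, gdbm";   value.dsize = strlen("hello, gdbm");

        /* Store the record, replacing any existing record with the same key. */
        gdbm_store(db, key, value, GDBM_REPLACE);

        /* Fetch it back; the returned dptr is malloc'ed and must be freed. */
        datum result = gdbm_fetch(db, key);
        if (result.dptr != NULL) {
            printf("%.*s\n", result.dsize, result.dptr);
            free(result.dptr);
        }

        gdbm_close(db);
        return 0;
    }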
This is release 1.7.3 of GNU dbm.
To compile gdbm:
To compile the optional test and conversion programs:
To install the basic package:
To install the optional dbm and ndbm compatibility headers:
(You might want to "gnumake -n install" to make sure it will put things where you want them.)
Please report bugs to
bug-gnu-utils@prep.ai.mit.edu
The author of GNU dbm may be reached via e-mail to <phil@cs.wwu.edu>, and the current maintainer may be reached at <downsj@csos.orst.edu>. E-mail may be sent to either, or both, of these people.
Future versions of GDBM may be far, far more UNIX dependent than the library currently is. If you are porting or have ported GDBM to non-UNIX-like operating systems, please send e-mail to <downsj@csos.orst.edu>. Please include information about your port, including the type of operating system, your reasons for doing the port, and what changes you have made.
During the course of porting many packages there have been several occasions where the package requires an API not provided by VM/OE. Sometimes these routines are non-standard or are at a level greater than that provided by VM/ESA. This library collects some of the most common routines I've encountered. Since this package was created, subsequent releases of VM/ESA OpenEdition have incorporated many of these routines in the base (especially the work done in VM/ESA 2.3.0). Thus some of these routines are now redundant (e.g. getopt()).
The makefile will now build an XPG4 TXTLIB file that can be included in your GLOBAL TXTLIB list. This will allow the linker to find routines that are defined as "library routines" (e.g. truncate).
Many, if not most, serious applications use a facility known as syslog to write and distribute messages. The actual logging of these messages can occur on the local system or a remote system, and messages can be sent to specific files, to the system operator, or to other users. For most of the applications described in this paper you will see excerpts from the system log created by those applications.
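As a minimal sketch of how an application typically writes such messages, the following C fragment uses the standard syslog calls; the identifier, facility and message texts are arbitrary examples:

    #include <syslog.h>

    int main(void)
    {
        /* Identify ourselves to syslogd; LOG_PID adds the process id to each line. */
        openlog("exampled", LOG_PID, LOG_DAEMON);

        /* Each message carries a priority; syslogd decides where it goes
           (local file, remote host, operator, other users) based on its config. */
        syslog(LOG_INFO, "example daemon starting");
        syslog(LOG_WARNING, "could not open configuration file, using defaults");

        closelog();
        return 0;
    }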
syslog allows you to consolidate your consoles just as PROP does under VM. At TAB we have two AIX-based firewalls which produce copious amounts of log messages tracking the access and usage of our Internet connection. Initially, each of these systems created its own logs. With syslog these messages could be consolidated, at first on one of the firewalls and, following the port of syslogd, on our VM/ESA system.
The ability of syslog to route messages of various types then allows us to distribute specific message types to the people responsible for a given application/function. The message traffic can also be directed to the VM system operator for integration with PROP. We currently direct our SYSLOG output to NetMaster running under VM, where it is centrally logged and where we have the ability to generate actions based on message content.
It is also possible to provide the reverse of this process. That is, VM messages could be sent to a remote *NIX system for consolidation and action. This would involve providing PROP (or the NetView *NCCF service) with routines which used the syslog service.
Copyright (c) 1992-1996 Regents of the University of Michigan.
All rights reserved.
Redistribution and use in source and binary forms are permitted provided that this notice is preserved and that due credit is given to the University of Michigan at Ann Arbor. The name of the University may not be used to endorse or promote products derived from this software without specific prior written permission. This software is provided "as is" without express or implied warranty.
I've only exercised the slapd daemon and the database tools. The other stuff has compiled and linked without errors. I've updated all fork/spawn occurrences, POSIX thread implementation, and ASCII/EBCDIC character set dependencies.
This package requires gdbm and syslogd.
The University of Michigan is pleased to announce release 3.3 of UM-LDAP, an implementation of the Lightweight Directory Access Protocol. LDAP is a draft Internet standard directory service protocol that runs over TCP/IP. It can be used to provide a stand-alone directory service, or to provide lightweight access to the X.500 directory. LDAP is defined by RFC 1777 and RFC 1778.
This release includes the following components:
In addition, there are some contributed components:
Changes since release 3.2 of LDAP include
See the CHANGES file in the distribution for more details.
This software is freely available to anyone for any lawful purpose, subject to the U-M copyright notice and disclaimer. The software is available for anonymous ftp from the following location:
ftp://terminator.rs.itd.umich.edu/ldap/ldap-3.3.tar.Z
The software is provided as is without any express or implied warranty, but there is a bug reporting mail address which is responded to on a best-effort basis:
ldap-support@umich.edu
In addition, there is a discussion list for issues relating to this implementation of ldap:
ldap@umich.edu -- discussion list ldap-request@umich.edu -- to join the list
Comments or questions about the LDAP protocol in general should be sent to the IETF ASID discussion group:
ietf-asid@umich.edu -- discussion list ietf-asid-request@umich.edu -- to join the list
An LDAP home page containing lots of interesting information and online documentation is available at this URL:
http://www.umich.edu/~rsug/ldap/
This release has been ported to many UNIX platforms, including SunOS 4.1.x, Solaris 2.x, Ultrix 4.3, HP-UX 9.05, AIX 3.2.5, SCO, FreeBSD, NetBSD, LINUX, IRIX, Digital Unix (OSF/1), and NeXTSTEP 3.2. This release has also been ported to VMS.
The client libraries and some clients have also been ported to MacOS 7.x, MSDOS (some TCP stacks), and MS Windows 3.1/95/NT.
This is the UM-LDAP version 3.3 distribution. For a description of what this distribution contains, see the ANNOUNCEMENT file in this directory. For a description of changes from previous releases, see the CHANGES file in this directory. For a more detailed description of how to make and install the distribution, see the INSTALL file in this directory. For more information on making and installing slapd, see the "SLAPD and SLURPD Administrator's Guide" in the doc/guides/ directory.
You should be able to make and install the distribution with a pretty standard default configuration by typing the following commands
% make
% su
# make install
in this directory. This should produce something that basically works.
You will probably want to do a little configuration to suit your site, though. There are two files you might want to edit:
See the INSTALL file in this directory for more information.
There are man pages for most programs in the distribution and routines in the various libraries. See ldap(3) for details.
There is a PostScript version of an administrator's guide for slapd in doc/guides/slapd.ps.
There is an LDAP homepage available that contains the latest LDAP news, release announcements, pointers to other LDAP resources, etc. You can access it at this URL:
http://www.umich.edu/~rsug/ldap/
We would appreciate any feedback you can provide. If you have problems, report them to this address:
ldap-support@umich.edu
This package has been compiled and linked, so unless your processor doesn't support the string hardware assists, you won't need to re-build. The Makefile is already configured for VM/ESA 2.3.0. If you are on an earlier version, locate the LIBS= line, comment it out, and uncomment the line above. This will pick up the gettimeofday() routine from my XPG4 library (which is available at this site).
IRC stands for "Internet Relay Chat". It was originally written by Jarkko Oikarinen (jto@tolsun.oulu.fi) in 1988. Since starting in Finland, it has been used in over 60 countries around the world. It was designed as a replacement for the "talk" program but has become much much more than that. IRC is a multi-user chat system, where people convene on "channels" (a virtual place, usually with a topic of conversation) to talk in groups, or privately. IRC is constantly evolving, so the way things work one week may not be the way they work the next. Read the MOTD (message of the day) every time you use IRC to keep up on any new happenings or server updates.
IRC gained international fame during the 1991 Persian Gulf War, where updates from around the world came across the wire, and most irc users who were online at the time gathered on a single channel to hear these reports. IRC had similar uses during the coup against Boris Yeltsin in September 1993, where IRC users from Moscow were giving live reports about the unstable situation there.
The user runs a "client" program (usually called 'irc') which connects to the IRC network via another program called a "server". Servers exist to pass messages from user to user over the IRC network.
First, check to see if irc is installed on your system. Type "irc" from your prompt. If this doesn't work, ask your local systems people if irc is already installed. This will save you the work of installing it yourself.
If an IRC client isn't already on your system, you can either compile the source yourself, have someone else on your machine compile the source for you, or use the TELNET client: "telnet ircclient.itc.univie.ac.at 6668". Please only use the latter when you have no other way of reaching IRC, as this resource is quite limited, slow, and *very* unreliable.
You can anonymous ftp to any of the following sites (use the one closest to you). *** If you don't know what anonymous ftp is, ask your local systems people to show you ***

UNIX client ->        cs.bu.edu /irc/clients
                      ftp.acsu.buffalo.edu /pub/irc
                      ftp.funet.fi /pub/unix/irc
                      coombs.anu.edu.au /pub/irc
                        (NB. if there is something related to IRC and it can't be
                        found under coombs.anu.edu.au:/pub/irc then it isn't worth having)
                      ftp.informatik.tu-muenchen.de /pub/comp/networking/irc/clients
                      slopoke.mlb.semi.harris.com /pub/irc
                      There is also a client available with the server code.
EMACS elisp ->        cs.bu.edu /irc/clients/elisp
                      ftp.funet.fi /pub/unix/irc/Emacs
                      ftp.informatik.tu-muenchen.de /pub/comp/networking/irc/clients
                      slopoke.mlb.semi.harris.com /pub/irc/emacs
                      cs.hut.fi /pub/irchat
X11 client ->         catless.ncl.ac.uk /pub
                      harbor.ecn.purdue.edu /pub/tcl/code
VMS ->                cs.bu.edu /irc/clients/vms
                      coombs.anu.edu.au /pub/irc/vms
                      ftp.funet.fi /pub/unix/irc/vms
                      ftp.informatik.tu-muenchen.de /pub/net/irc
REXX client for VM -> cs.bu.edu /irc/clients/rxirc
                      ftp.informatik.uni-oldenburg.de /pub/irc/rxirc
                      ftp.informatik.tu-muenchen.de /pub/net/irc/VM
                      coombs.anu.edu.au /pub/irc/rxirc
                      ftp.funet.fi /pub/unix/irc/rxirc
MSDOS ->              cs.bu.edu /irc/clients/msdos
                      ftp.funet.fi /pub/unix/irc/msdos
Macintosh ->          cs.bu.edu /irc/clients/macintosh
                      sumex-aim.stanford.edu /info-mac/comm
                      ftp.funet.fi /pub/unix/irc/mac
                      ftp.ira.uka.de /pub/systems/mac
It's usually best to try and connect to one geographically close, even though that may not be the best. You can always ask when you get on IRC. Here's a list of servers available for connection:

USA:        cs-pub.bu.edu  irc.colorado.edu  irc-2.mit.edu
Canada:     ug.cs.dal.ca
Europe:     irc.funet.fi  cismhp.univ-lyon1.fr  disuns2.epfl.ch  irc.nada.kth.se  sokrates.informatik.uni-kl.de  bim.itc.univie.ac.at
Australia:  jello.qabc.uq.oz.au

This is, by no means, a comprehensive list, but merely a start. Connect to the closest of these servers and join the channel #Twilight_Zone. When you get there, immediately ask what you want. Don't say "I have a question" because then hardly anyone will talk.
It's probably best to take a look around and see what you want to do first. All IRC commands start with a "/", and most are one word. Typing /help will get you help information. /names will get you a list of names, etc.
The output of /names is typically something like this->
Pub: #hack     zorgo eiji Patrick fup htoaster
Pub: #Nippon   @jircc @miyu_d
Pub: #nicole   MountainD
Pub: #hottub   omar liron beer Deadog moh pfloyd Dode greywolf SAMANTHA

(Note there are LOTS more channels than this, this is just sample output -- one way to stop /names from being too large is doing /names -min 20 which will only list channels with 20 or more people on them, but you can only do this with the ircII client).
"Pub" means public (or "visible") channel. "hack" is the channel name.
"#" is the prefix. A "@" before someone's nickname indicates he/she is the "Channel operator" (see #7) of that channel. A Channel Operator is someone who has control over a specific channel. It can be shared or not as the first Channel Operator sees fit. The first person to join the channel automatically receives Channel Operator status, and can share it with anyone he/she chooses (or not). Another thing you might see is "Prv" which means private. You will only see this if you are on that private channel. No one can see Private channels except those who are on that particular private channel.
A channel operator is someone with a "@" by their nickname in a /names list, or a "@" by the channel name in /whois output. Channel operators are kings/queens of their channel. This means they can kick you out of their channel for no reason. If you don't like this, you can start your own channel and become a channel operator there.
An IRC operator is someone who maintains the IRC network. They cannot fix channel problems. They cannot kick someone out of a channel for you. They cannot /kill (kick someone out of IRC temporarily) someone just because you gave the offender channel operator privileges and said offender kicked *you* off.
"bot" is short for "robot". It is a script run from an ircII client or a separate program (in perl, C, and sometimes more obscure languages). StarOwl@uiuc.edu (Michael Adams) defined bots very well: "A bot is a vile creation of /lusers to make up for lack of penis length". IRC bots are generally not needed. See (10) below about "ownership" of nicknames and channels.
It should be noted that many servers (especially in the USA) have started to ban ALL bots. Some ban bots so much that if you run a bot on their server, you will be banned from using that server (see segment below on K: lines).
#hottub and #initgame are almost always teeming with people. #hottub is meant to simulate a hot tub, and #initgame is a non-stop game of "inits" (initials). Just join and find out!
To get a list of channels with their names and topics, do /list -min 20 (on ircII) which will show you channels with 20 or more members. You can also do this for smaller numbers.
Many IRC operators are in #Twilight_Zone ... so if you join that channel be prepared for a lot of senseless dribble, more like what you find on the other channels listed above (#hottub). What was once a place of people who could help you has turned into just another place for those who have nothing better to do with themselves than just be there. If you find other documents saying go there to ask questions, ignore them. They should be considered to be out of date.
There are not enough nicknames to have nickname ownership. If someone takes your nickname while you are not on IRC, you can ask for them to give it back, but you cannot *demand* it, nor will IRC operators /kill for nickname ownership. If you go to #Twilight_zone, you will find a bunch of people who will refuse to do this for you, yet they will do it for themselves or their friends, or use /kill for even less reasonable purposes.
There are, literally, millions of possible channel names, so if someone is on your usual channel, just go to another. You can /msg them and ask for them to leave, but you can't *force* them to leave.
Channel operators are the owner(s) of their respective channels. Keep this in mind when giving out channel operator powers (make sure to give them to enough people so that all of the channel operators don't unexpectedly leave and the channel is stuck without a channel operator).
On the other hand, do not give out channel operator to *everyone*. This causes the possibility of mass-kicking, where the channel would be stuck without any channel operators.
You have one option. You can ask everyone to leave and rejoin the channel. This is a good way to get channel operator back. It doesn't work on large channels or ones with bots, for obvious reasons.
Never type anything anyone tells you to without knowing what it is. There is a problem with typing certain commands with the ircII client that give anyone immediate control of your client (and thus can gain access to your account).
On IRC, you cannot be banned from every single server. Server-banning exists only on a per-server basis (being banned on one server does not mean you are automatically banned from another). "Ghosts are not allowed on IRC" means that you are banned from using that server. The banning is in one of three forms:
The most general answer is "use another server", but if it bothers you, try writing to the irc administrator of that site --> /admin server.name.here -- plead your case. It might even get somewhere!
GIF archives of IRC people are available:
ftp.funet.fi:/pub/pics/people/misc/irc (NORDUnet only) ftp.informatik.tu-muenchen.de /pub/comp/networking/irc/RP
The best, basic, IRC user's manual is the IRC Primer, available in plain text, PostScript, and LaTeX from cs-pub.bu.edu:/irc/support ... Another good place to start might be downloading the IRC tutorials. They're available via anonymous ftp from cs-pub.bu.edu in /irc/support/tutorial.*
You can also join various IRC related mailing lists:
Note: These are not "Help me, where can I get started?" lists. For that information, read the IRCprimer noted above.
Those looking for more technical information can get the IRC RFC (rfc1459) available at all RFC ftp sites, as well as cs-pub.bu.edu:/irc/support/rfc1459.txt
email hrose@kei.com or avalon@coombs.anu.edu.au
Note: This implementation of INETD uses a takesocket/getsocket mechanism to pass socket file descriptors to the new process. This will be fixed as soon as VM's version of spawn supports the passing of file descriptors. This means that, for the time being, udp file descriptors can be passed by any mechanism.
This program invokes all internet services as needed. Connection-oriented services are invoked each time a connection is made by creating a process (in this or another virtual machine). This process is passed the connection as file descriptor 0 and is expected to do a getpeername to find out the source host and port.
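As a rough sketch (not part of this package) of what such a connection-oriented service looks like, the program below assumes it was started by inetd with the accepted connection on file descriptor 0:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in peer;
        socklen_t len = sizeof peer;

        /* inetd hands us the accepted connection as file descriptor 0. */
        if (getpeername(0, (struct sockaddr *)&peer, &len) == 0)
            fprintf(stderr, "connection from %s port %d\n",
                    inet_ntoa(peer.sin_addr), ntohs(peer.sin_port));

        /* Reply on the same descriptor; it is a socket, so writes go to the peer. */
        const char *msg = "hello from an inetd-started service\r\n";
        write(0, msg, strlen(msg));
        return 0;
    }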
Datagram-oriented services are invoked when a datagram arrives; a process is created and passed a pending message on file descriptor 0. Datagram servers may either connect to their peer, freeing up the original socket for inetd to receive further messages on, or "take over the socket", processing all arriving datagrams and, eventually, timing out. The first type of server is said to be "multi-threaded"; the second type, "single-threaded".
INETD uses a configuration file which is read at startup and, possibly, at some later time in response to a hangup signal. The configuration file is "free format" with fields given in the order shown below. Continuation lines for an entry must begin with a space or tab. All fields must be present in each entry.
service name               must be in /etc/services
socket type                stream/dgram
protocol                   must be in /etc/protocols
wait/nowait                single-/multi-threaded
user                       user to run daemon as
server program             full path name
server program arguments   maximum of MAXARGS (20)
Comment lines are indicated by a "#" in column 1.
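For illustration only, a hypothetical entry for a stream-based TCP service run as root might look like the following line; the path and program name are examples, not part of this package's configuration:

    daytime  stream  tcp  nowait  root  /usr/sbin/daytimed  daytimed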
This is version 1.9.18 of Samba, the free SMB and CIFS client and server for unix and other operating systems. Samba is maintained by the Samba Team, who support the original author, Andrew Tridgell.
Please read THE WHOLE of this file as it gives important information about the configuration and use of Samba.
This software is freely distributable under the GNU public license, a copy of which you should have received with this software (in a file called COPYING).
This is a big question.
The very short answer is that it is the protocol by which a lot of PC-related machines share files and printers and other information such as lists of available files and printers. Operating systems that support this natively include Windows NT, OS/2, and Linux, and add-on packages that achieve the same thing are available for DOS, Windows, VMS, Unix of all kinds, MVS, and more. Apple Macs and some Web browsers can speak this protocol as well. Alternatives to SMB include Netware, NFS, Appletalk, Banyan Vines, Decnet etc; many of these have advantages but none are both public specifications and widely implemented in desktop machines by default.
The Common Internet Filesystem is what the new SMB initiative is called. For details watch CIFS.
Here is a very short list of what samba includes, and what it does. For many networks this can be simply summarised by "Samba provides a complete replacement for Windows NT, Warp, NFS or Netware servers."
For a much better overview have a look at the web site and browse the user survey.
If you want to contribute to the development of the software then please join the mailing list. The Samba team accepts patches (preferably in "diff -u" format, see docs/BUGS.txt for more details) and are always glad to receive feedback or suggestions to the address samba-bugs@samba.anu.edu.au. We have recently put a new bug tracking system into place which should help the throughput quite a lot. You can also get the Samba sourcecode straight from the CVS tree - see CVS.
You could also send hardware/software/money/jewelry or pizza vouchers directly to Andrew. The pizza vouchers would be especially welcome, in fact there is a special field in the survey for people who have paid up their pizza :-)
If you like a particular feature then look through the CVS change-log and see who added it, then send them an email.
Remember that free software of this kind lives or dies by the response we get. If no one tells us they like it then we'll probably move onto something else. However, as you can see from the user survey quite a lot of people do seem to like it at the moment :-)
Andrew Tridgell
Email: samba-bugs@samba.anu.edu.au
3 Ballow Crescent
Macgregor, A.C.T. 2615
Australia
There is quite a bit of documentation included with the package, including man pages, and lots of .txt files with hints and useful info. This is also available from the web page. There is a growing collection of information under docs/faq; by the next release expect this to be the default starting point.
A list of Samba documentation in languages other than English is available on the web page.
If you would like to help with the documentation (and we _need_ help!) then have a look at the mailing list samba-docs, archived at archives.
Please use a mirror site! The list of mirrors is in docs/MIRRORS.txt. The master ftp site is samba.anu.edu.au in the directory pub/samba.
There is a mailing list for discussion of Samba. To subscribe send mail to listproc@samba.anu.edu.au with a body of "subscribe samba Your Name"
Please do NOT send this request to the list alias instead.
To send mail to everyone on the list mail to samba@listproc.anu.edu.au
There is also an announcement mailing list where new versions are announced. To subscribe send mail to listproc@samba.anu.edu.au with a body of "subscribe samba-announce Your Name". All announcements also go to the samba list.
You might also like to look at the usenet news group comp.protocols.smb as it often contains lots of useful info and is frequented by lots of Samba users. The newsgroup was initially set up by people on the Samba mailing list. It is not, however, exclusive to Samba; it is a forum for discussing the SMB protocol (which Samba implements). The samba list is gatewayed to this newsgroup.
A Samba WWW site has been setup with lots of useful info. Connect to:
As well as general information and documentation, this also has searchable archives of the mailing list and a user survey that shows who else is using this package. Have you registered with the survey yet? :-)
It is maintained by Paul Blackman (thanks Paul!). You can contact him at ictinus@samba.anu.edu.au.
This is a fully ported perl for VM/ESA Version 2.3. It relies on many of the new APIs found in this release.
This distribution is pre-built for hardware supporting the string assist functions. I would advise against trying to build this package from scratch as there are some inconsistencies with things like the c89 command and shell escape sequences which make the build process more complex than is desirable.
If you do take the hard way and do the rebuild you will also need my XPG4 distribution (libxpg4) for APIs not supported under OE (yet). (See "LIBXPG4".)
There is a "hints" file for vmesa that specifies the correct values for most things.
If you've downloaded the binary distribution, it needs to be installed below /usr/local. Don't worry about renaming files, that's for source distributions. You do, however, need to worry about the networking configuration files discussed in the last bullet below.
Some things to watch out for are
RFC 867 and RFC 868 define two time related services for the Internet. We came across two packages which required the daemons implementing these protocols. This port was trivial and required little if any modification. The following descriptions are extracted from [RFC867] and [RFC868].
One daytime service is defined as a connection based application on TCP. A server listens for TCP connections on TCP port 13. Once a connection is established the current date and time is sent out the connection as an ASCII character string (and any data received is thrown away). The service closes the connection after sending the string.
Another daytime service is defined as a datagram based application on UDP. A server listens for UDP datagrams on UDP port 13. When a datagram is received, an answering datagram is sent containing the current date and time as an ASCII character string (the data in the received datagram is ignored).
This protocol provides a site-independent, machine readable date and time. The Time service sends back to the originating source the time in seconds since midnight on January first 1900.
One motivation arises from the fact that not all systems have a date/time clock, and all are subject to occasional human or machine error. The use of time-servers makes it possible to quickly confirm or correct a system's idea of the time, by making a brief poll of several independent sites on the network.
When used via TCP the time service works as follows:
S: Listen on port 37 (45 octal).
U: Connect to port 37.
S: Send the time as a 32 bit binary number.
U: Receive the time.
U: Close the connection.
S: Close the connection.
The server listens for a connection on port 37. When the connection is established, the server returns a 32-bit time value and closes the connection. If the server is unable to determine the time at its site, it should either refuse the connection or close it without sending anything.
When used via UDP the time service works as follows:
S: Listen on port 37 (45 octal).
U: Send an empty datagram to port 37.
S: Receive the empty datagram.
S: Send a datagram containing the time as a 32 bit binary number.
U: Receive the time datagram.
The server listens for a datagram on port 37. When a datagram arrives, the server returns a datagram containing the 32-bit time value. If the server is unable to determine the time at its site, it should discard the arriving datagram and make no reply.
./daytimed
<15>Apr 15 19:55:11 DAYTIMED[5398]:DAYTIME daemon starting
<13>Apr 15 19:55:11 DAYTIMED[5398]:Initialising communications
served 'daytime' request from 11.1.8.205 via 'tcp'
served 'time' request from 11.1.8.205 via 'tcp'
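To illustrate the TCP side of the time protocol from a client's point of view, here is a minimal C sketch (not part of this package); the server address is an arbitrary example, and 2208988800 is the number of seconds between the protocol's 1900 epoch and the Unix 1970 epoch:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        const char *host = (argc > 1) ? argv[1] : "127.0.0.1";  /* example default */
        int s = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_port = htons(37);                 /* RFC 868 time service */
        sa.sin_addr.s_addr = inet_addr(host);

        if (connect(s, (struct sockaddr *)&sa, sizeof sa) < 0) {
            perror("connect");
            return 1;
        }

        /* The server sends a 32 bit big-endian count of seconds since 1900-01-01. */
        unsigned char buf[4];
        if (read(s, buf, 4) != 4) {
            fprintf(stderr, "short read from time server\n");
            return 1;
        }
        close(s);

        unsigned long since1900 = ((unsigned long)buf[0] << 24) |
                                  ((unsigned long)buf[1] << 16) |
                                  ((unsigned long)buf[2] << 8)  |
                                   (unsigned long)buf[3];
        time_t t = (time_t)(since1900 - 2208988800UL);  /* convert to the Unix epoch */
        printf("%s", ctime(&t));
        return 0;
    }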
JacORB is a free Java object request broker written by Gerald Brose of the Institute for Information Technology Berlin (see Gerald's WWW site). JacORB comes with full source code and a number of example programs. Additionally, both IDL and Java code for all OMG object services defined in the Common Object Services Specification, Volume I, Revision 1.0, March 1994 are included in this distribution.
The IDL-Java language mapping provided by the JacORB IDL compiler is close to the OMG standard in IDL/Java Language Mapping. Most of the differences between these mappings should not be important or even visible for application programmers.
The ASCII/EBCDIC issues have not been addressed yet. Running the accompanying demonstration programs will only give the correct results on like platforms.
This is an early release of the package; I have not had time to exercise all of the features.
Apache is an HTTP server designed as a plug-in replacement for the NCSA server version 1.3 (or 1.4). It fixes numerous bugs in the NCSA server and includes many frequently requested new features, and has an API which allows it to be extended to meet users' needs more easily.
Details of the latest version can be found on the Apache HTTP server project page under http://www.apache.org/.
The documentation available as of the date of this release is also included, in HTML format, in the htdocs/manual/ directory. The most up-to-date documentation can be found on apache.org.
$Revision: 1.39 $
Welcome to inn 1.5.1
This is the public release of version 1.5.1 of InterNet News. This work is sponsored by the Internet Software Consortium.
This release, while having various bug fixes and a few minor enhancements, is mostly here to fix a security hole in parsecontrol. It is recommended that if you are running 1.5 you upgrade immediately. If you're running 1.4 you have the same bug present, and should probably upgrade as well.
1.4sec was the last release produced by Rich Salz, the original author of INN. 1.4sec came out in the middle of 1993. After that Rich was unable to dedicate the time necessary to maintain it (and I now fully understand why). Dave Barr unofficially took over and produced 4 releases with a lot of help from the user community. Dave's 4 releases contained a lot of bug fixes and some functionality additions.
Rich Salz, at the beginning of 1996, handed the source pool over to me as part of the agreement that the ISC would take over maintenance and development of future INN releases. Starting with Rich's post-1.4 source pool, I merged in the changes that had occurred through the 4 'unoff' versions, and then added quite a few bug fixes, enhancements etc. 3 alpha and 2 beta versions later you have the result of all that work.
Even though most of the code has been written by other people, I'm the one to blame for all this. I went cross-eyed double checking every change that went into this release, so all errors are mine.
Rich Salz deserves a big thank-you for writing INN in the first place and then being kind enough to bless me and the ISC as the keepers of the flame. Paul Vixie at the ISC provided the financing necessary to keep me in espresso coffee through too many sleepless nights (and the boot up the backside to make the nights sleepless). Dave Barr did a fine job holding the fort while INN was in limbo, and getting INN into its current form without the unoff releases to work with would have been a much longer job.
Many, many other people have helped in various ways. As I said above, most of the code has been written by other people and my job has been to stitch it all together. The CONTRIBUTORS file lists who helped.
I'm gathering information on who uses INN. If you haven't done so for a previous version of INN, then please do the following:
uname -a | Mail -s "1.5.1 usage survey" inn-survey@isc.org
I will appreciate it. You won't get a reply.
I'm interested in all bug reports. Not just on the programs, but on the documentation too. Please send *all* such reports to:
inn-bugs@isc.org

Even if you post to usenet, please CC the above address. All other INN mail should go to:
inn@isc.org
For general "how do I do this" questions you should post to news.software.nntp as there a lot of experienced INN users there, and I don't have the time necessary to help except when something is obviously broken.
Have fun, and no postcard is necessary (I move too frequently these days).
Note that INN is supported by the Internet Software Consortium, and although it is free for use and redistribution and incorporation into vendor products and export and anything else you can think of, it costs money to produce. That money comes from ISP's, hardware and software vendors, companies who make extensive use of the software, and generally kind hearted folk such as yourself.
The Internet Software Consortium has also commissioned a DHCP server implementation, handles the official support/release of BIND, and supports the Kerberos Version 5 effort at MIT. You can learn more about the ISC's goals and accomplishments from the web page at www.isc.org
James Brister
inn@isc.org (INN related mail)
brister@vix.com (non-INN mail)
InterNetNews is a complete Usenet system. The cornerstone of the package is innd, an NNTP server that multiplexes all I/O. Think of it as an nntpd merged with the B News inews, or as a C News relaynews that reads multiple NNTP streams. Newsreading is handled by a separate server, nnrpd, that is spawned for each client. Both innd and nnrpd have some slight variances from the NNTP protocol; see the manpages.
The distribution is a compressed tar file. Create a new directory and unpack the tar file in that directory. For example:
; mkdir inn
; cd inn
; ftp ftp.uu.net
ftp> user anonymous <you@your.host.name>
ftp> type image
ftp> get news/nntp/inn/inn.tar.Z inn.tar.Z
ftp> quit
; uncompress <inn.tar.Z | tar vxf -
; rm inn.tar.Z
The installation instructions are in Install.ms. This is an nroff/troff document that uses the -ms macro package, and is about 30 typeset pages. (If you have groff use "-mgs".) The distribution has this file split into two pieces; you can join them by typing either of the following commands:
; make Install.ms
; cat Install.ms.? >Install.ms

You should probably print out a copy of config/config.dist when you print out the installation manual.
Please read the COPYRIGHT. This package has NO WARRANTY; use at your own risk.
When updating from a previous release, you will usually want to do "make update" from the top-level directory; this will only install the programs. To update your scripts and config files, cd into the "site" directory and do "make clean" -- this will remove any files that are unchanged from the official release. Then do "make diff >diff"; this will show you what changes you will have to merge in. Now merge in your changes (from where the files are, ie. /usr/lib/news...) into the files in $INN/site. (You may find that due to the bug fixes and new features in this release, you may not need to change any of the scripts, just the configuration files). Finally, doing "make install" will install everything.
If you have a previous release you will probably also want to update the pathnames, etc., in the new config file from your old config. Here is one way to do that:
% cd config
% make subst
% cp config.dist config.data
% ./subst -f {OLDFILE} config.data
where "{OLDFILE}" names your old config.data file.
Configuration is done using subst. Subst is in config/subst.sh and doc/subst.1. The history file is written using DBZ. The DBZ sources and manual page are in the dbz directory. Unlike subst, DBZ is kept separately, to make it easier to track the C News release. The subst script and DBZ data utilities are currently at the "Performance Release" patch date. Thanks to Henry Spencer and Geoff Collyer for permission to use and redistribute subst, and to Jon Zeef for permission to use DBZ as modified by Henry.
This version includes new support for TCL filtering of articles (to either reject or accept them). This work was done by Bob Halley. See the file README.tcl_hook for more details.
This version includes support for Geoff Collyer's news overview package, known as nov. Nov replaces the external databases used by nn, trn, etc., with a common text database. INN support includes programs to build and maintain the overview database, and an XOVER command added to nnrpd (the news-reading daemon) that is becoming a common extension to fetch the overview data from an NNTP connection. Nnrpd uses the overview database internally, if it exists, making certain commands (e.g., XHDR) much faster. The nov package includes a newsreader library that you will need, and some utilities that you will not; it is available on world.std.com in the file src/news/nov.dist.tar.Z. Prototypes of modified newsreaders are in src/news/READER.dist.tar.Z -- most maintainers will be providing official support very soon. To make it explicit: if you already have a newsreader that can use the overview database, either via my NNTP xover command, or by reading directly from NFS, then INN has all you need.
Analog was created by Stephen Turner and is in use on virtually every computer platform there is. It is the most popular web log analyzer in the world, and works on Windows, Macintosh, Unix, OS/390, OS/2, and VMS, and has now been ported to the VM/CMS system by Gordon Wolfe of the Boeing Company. The Analog program on VM/CMS runs as an OpenEdition program, so you must have OpenEdition installed and running on your system.
For more information on Analog's capabilities, see the web site at http://www.statslab.cam.ac.uk/~sret1/analog/.