IEN-185
 INDRA Note 1101
 May 26, 1981







              DEVELOPMENT OF UK/US NETWORK SERVICES

                  AT UNIVERSITY COLLEGE, LONDON






                        Robert T. Braden

                       Peter L. Higginson





                             IEN-185
                          May 26, 1981










        ABSTRACT: This  document  describes  the  second-
        generation  service facility under development at
        UCL, to connect the DARPA Catenet with  X25-based
        networks  in the UK.  The facilities will include
        a  Terminal  Protocol  Converter,   a   Transport
        Service Gateway, and an "IP Tunnel".






                        INDRA Note # 1101
                 Department of Computer Science
                   University College, London





                            CONTENTS




  1. INTRODUCTION...........................................3


  2. PROTOCOL CONSIDERATIONS................................5

     2.1 Routing and Addressing.............................5
     2.2 Protocol Conversion................................7
     2.3 Cost Minimisation..................................8
     2.4 Access Control.....................................8

  3. SERVICE FACILITIES.....................................9

     3.1 Terminal Service...................................9
     3.2 File Transfer Traffic..............................10
     3.3 MOD Services.......................................11

  4. TERMINAL PROTOCOL CONVERTER............................15


  5. CONCLUSIONS............................................19


  6. REFERENCES.............................................20





























                            [Page 1]


UK/US Service at UCL                                      IEN-185

1. INTRODUCTION

This report  describes  the  network  interconnection  facilities
being  developed  at  University College, London (UCL) to support
US/UK   collaborative   research   computing.    Briefly,   these
facilities  will  link  users and resources connected to the ARPA
Catenet of the US Department of Defense with users and  resources
on  several  public  and  private  networks  in the UK.  Terminal
access to remote time-sharing systems, file  transfer,  and  mail
services are to be provided [1,2].

The facility to be described will be a "second generation"  US/UK
network  interconnection  at  UCL,  replacing  a  UK/ARPANET link
established at UCL  in  1974.   UK  users  accessed  the  earlier
service  through SRCNET [3,15], a network operated by the Science
Research Council; through EPSS [4], an experimental  public  data
network;  or  by direct dial-in to a TIP at UCL.  This service is
now almost  totally  obsolete,  as  the  networking  worlds  have
changed markedly on both sides of the Atlantic.


 (1)    Protocol Changes

        The ARPANET has been embedded in the "Catenet",  i.e.,  a
        network  of  networks.   To  support  host  communication
        across the Catenet,  the  ARPANET  host-to-host  protocol
        "NCP"  [5]  has  been  replaced by an end-to-end protocol
        TCP, operating over  an  internetwork  datagram  protocol
        layer,  IP [6,7].  On the other hand, the ARPANET higher-
        level  terminal protocol Telnet  and  the  file  transfer
        protocol  FTP have been kept essentially unchanged by the
        Catenet [5].

        Meanwhile,  the  UK  has  generally  adopted  the  CCITT-
        recommended  X25  network-level  protocol  as well as the
        terminal  access  protocols  X3,   X28,   X29   (commonly
        collected under the rubric "XXX" or "Triple-X") [8].

        Furthermore, UK working groups have adopted  a  transport
        service,  NITS  [9,14]  (known as "Yellow Book"  from the
        colour of the document's  cover),  and  a  file  transfer
        protocol, NIFTP [10] (known as "Blue Book").  Both are de
        facto  UK  standards  and  have  been  submitted  to  the
        international standards bodies CCITT and ISO.

        Thus, the UK has moved to  adopt  international  standard
        protocols,  all  of  which  differ from the corresponding
        DARPA Catenet protocols.







                            [Page 3]


UK/US Service at UCL                                      IEN-185

 (2)    Network Changes

        The DARPA Catenet includes the ARPANET, a number of local
        networks,  and  a  satellite  network  SATNET  [11].   In
        particular, SATNET links the US continental ARPANET  with
        PPSN  [13], a packet-switching network of the UK Ministry
        of Defence (MOD).  The gateway between PPSN  and  SATNET,
        which  is  located  at  UCL,  has local ports into SATNET
        which provide one of the paths for US/UK interconnection.

        Public data networks have become a reality both in the US
        and  in  the  UK.   In  the  US,  "Value-Added  Networks"
        (VAN's)  such  as  Telenet  and  Tymnet  have  come  into
        existence.   In  the  UK,  the  government-owned  British
        Telecom has installed a  public  packet-switched  network
        PSS [12].

        PSS uses the standard protocols X25, X28, and  X29.   PSS
        users  have agreed to use NITS, NIFTP, and an enhancement
        of X29 called TS29 (the  "Green  Book")  [13].   PSS  has
        created a set of de facto national protocol standards for
        the UK, and private data networks are  likely  to  strive
        for compatibility with it. In particular, SRCNET has been
        moving towards almost  complete  compatibility  with  PSS
        protocols [15].

        Finally, an international packet switching service, IPSS,
        now links the UK with the Value-Added Networks in the US,
        Canada, and many other countries.  In the UK,  IPSS  will
        shortly be linked to PSS.  In the US, BBN is developing a
        "VAN Gateway" which will link  the  ARPANET  to  Telenet.
        Thus there will shortly be two paths linking UCL with the
        US ARPANET -- the X25-based public  carrier  facility  of
        PSS/IPSS/VAN,  and  the private DARPA Catenet facility of
        SATNET.


 (3)    New Administrative Requirements

        Public  data  networks  have  usage  charges  for   their
        services.   This  in  turn will force UCL to provide both
        access control and accounting  for  these  services,  and
        leads  to  cost minimisation considerations that have not
        been necessary previously.

        Furthermore, there are now at least two different classes
        of  users  for the internetwork services provided by UCL;
        these classes will see quite different kinds of services.
        The  SATNET  path  to  the US, which is available only to
        authorised DARPA Catenet users,  has  no  usage-dependent
        charge. Both the MOD and British Telecom are concerned to




                            [Page 4]


UK/US Service at UCL                                      IEN-185

        limit the use of this route.  Others in the UK  must  use
        the IPSS path, whose usage costs are very significant.


The magnitude of the packet and connect charges on IPSS, together
with  the  technology  of SATNET, will force important changes in
the mode of ARPANET use from the UK.

 (a)    The character-at-a-time, server-echo mode of terminal use
        which  is customary on most ARPANET hosts will be far too
        costly on the  IPSS  path.  It  will  still  be  feasible
        (although  somewhat  awkward,  because of long delays) on
        SATNET, but it makes very inefficient use of the  limited
        channel capacity.

 (b)    The ARPANET practice of  effectively  broadcasting  mail,
        i.e.,  sending  individual  copies of the same message to
        all recipients, is uneconomic over IPSS.

 (c)    Since file  transfer  is  expected  to  make  more  cost-
        effective  use of the channel than does terminal traffic,
        it will generally be cheaper for UK  users  to  send  and
        receive  their  ARPANET  mail  in the UK, transferring it
        across IPSS only in bulk. This implies  that  UCL  should
        provide a service host for receiving and composing mail.




2. PROTOCOL CONSIDERATIONS

The preceding section described the current environment  for  the
network interconnection facilities under development at UCL. This
section covers the communication protocol issues relevant to  the
design  of  these  facilities.   Later  sections  will  give more
details of the software and hardware required.

In general, two entirely different  protocol  domains  are  being
linked  -- the CCITT/public data network world of X25, XXX, NITS,
and NIFTP, and the DARPA internetwork world of IP,  TCP,  Telnet,
and  FTP.  There  are  several different types of problems, which
will be considered in turn:  divergent  mechanisms  for  handling
routing  and  addressing, protocol conversions, cost, and access-
control.



2.1 Routing and Addressing

To design the interconnection facility, we need to identify those
protocols  that  provide  (essentially)  the  same functionality,




                            [Page 5]


UK/US Service at UCL                                      IEN-185

i.e., which occupy the same  "protocol layer".  We will  use  the
following terminology:

 (a)    Network Layer

        The network layer provides reliable, ordered transmission
        across virtual circuits spanning a single address domain.

 (b)    Transport Layer

        The transport layer provides the same services  as  the
        network layer, and in addition provides them end-to-end
        across multiple address domains.

 (c)    User Layer

        We use this term for all protocols "above" the  transport
        layer, for example terminal and file transfer protocols.

To indicate  the  relative  position  of  two  protocols  in  the
hierarchy, we will use the notation:

   A > B

for (higher-level) protocol "A" implemented "over" protocol "B".

Obviously Telnet, XXX, NIFTP, and FTP occupy the  user  layer  as
defined  here.   Furthermore,  by  definition  NITS  occupies the
transport layer.  However, TCP and X25 may each  be  assigned  to
either  the  transport  or  the network layer, depending upon the
situation.

For example, the public data networks use  a  uniform,  globally-
unique  set  of  14-digit  addresses, and therefore form a single
address domain.  The gateways between public data  networks  also
provide  a  routing  mechanism.   As a  result, "bare" X25 can be
used as a transport service  over  a  single  network  or  across
interconnected  public  data  networks (e.g., U.S. VAN to IPSS to
PSS).

However, private UK networks (e.g., SRCNET) constitute  different
address  domains.   An  X25  virtual  call  whose two ends lie in
different address domains requires a transport service to specify
the  addressing  and  routing for setting up concatenated virtual
calls.  NITS (Network Independent Transport Service) [9] provides
a   general  source-route  addressing  facility  to  handle  such
multiple address domains.  NITS is then  the  transport  service,
and X25 is demoted to the network layer.

NITS allows multiple independent address domains, but provides no
global  routing  or  addressing mechanism.  Routing decisions are




                            [Page 6]


UK/US Service at UCL                                      IEN-185

assumed to be sufficiently static that they  can  be  built  into
tables in the originating host and in intermediate gateways.

The DARPA internet protocol, on the other hand, assumes a  single
global  addressing  domain  and  dynamic  routing  implemented by
cooperating gateways. Addresses are limited  to  32-bit  numbers.
The  protocol  combination  "TCP  >  IP",  used within its "home"
Catenet, constitutes a transport  service.   However,  TCP  >  IP
fails  to provide general end-to-end addressing between a Catenet
host and an X25 host, since TCP > IP  cannot  specify  X25-domain
addresses.

One solution to this problem of end-to-end  addressing  has  been
proposed for the BBN VAN gateway [24]: the gateway will contain a
fixed table to map X25 addresses  to  and  from  "fake"  internet
addresses.   To simplify maintenance of this and other equivalent
tables, it may be desirable for the gateway  to  do  the  address
mapping by referencing a "Name Server", available to all relevant
gateways.
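
Purely as an illustration of such a fixed table (the addresses and
names below are invented, not those of the BBN gateway), the mapping
might take the following form in "C":

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical entry pairing an X25 (X.121) address with the
       "fake" 32-bit internet address assigned to represent it.    */
    struct addr_map {
        const char   *x25_addr;     /* up to 14 decimal digits     */
        unsigned long ip_addr;      /* 32-bit internet address     */
    };

    /* Invented example entries; a real gateway would hold an
       administered table, or consult a Name Server instead.       */
    static const struct addr_map map[] = {
        { "23421920030013", 0x0A000001UL },
        { "23421920030099", 0x0A000002UL },
    };

    /* Map an X25 address to its fake internet address; returns 0
       if the address is unknown.                                  */
    unsigned long x25_to_ip(const char *x25)
    {
        size_t i;
        for (i = 0; i < sizeof(map) / sizeof(map[0]); i++)
            if (strcmp(map[i].x25_addr, x25) == 0)
                return map[i].ip_addr;
        return 0;
    }

    int main(void)
    {
        printf("%08lx\n", x25_to_ip("23421920030013"));
        return 0;
    }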

Another type of solution is provided by Bennett's  proposal  [18]
for  a true transport service protocol, based on NITS, to be used
above TCP/IP.  In practice, a combination of these techniques may
be used.

Finally, we can solve the addressing problem for  terminal  users
by  forcing  them  to  interact with each gateway between address
domains to specify the target address in the  new  domain.   This
was the solution in the UK/US service being replaced.



2.2 Protocol Conversion

Both paths between the  US  and  UCL  will  carry  DARPA  Catenet
protocols,  using  TCP  >  IP.   Over  the VAN/IPSS/PSS route, IP
datagrams will be encapsulated in X25 packets; this is  sometimes
referred  to  as an "internet tunnel" or "IP tunnel". The BBN VAN
Gateway will form the Catenet end of such an  IP  tunnel.   Since
Catenet  protocols  are  being  brought to the UK, the conversion
facilities must in general be in the UK; in particular, they will
be at UCL.
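
The encapsulation itself is simple.  The "C" sketch below shows the
idea under two simplifying assumptions -- that the X25 user-data
field carries 128 octets, and that the datagram is treated as an
opaque octet string -- with x25_send standing in for the real X25
level on the single multiplexed virtual call.

    #include <stdio.h>
    #include <string.h>

    #define X25_DATA_MAX 128   /* assumed user-data field size     */

    /* Stand-in for the X25 level: send one packet on the tunnel's
       virtual call, with the more-data (M) bit set whenever the
       datagram continues in a further packet.                     */
    static void x25_send(const unsigned char *data, size_t len, int more)
    {
        (void)data;
        printf("X25 packet: %u octets, M=%d\n", (unsigned)len, more);
    }

    /* Encapsulate one IP datagram: it is carried as an opaque
       octet string in the data fields of one or more X25 packets. */
    static void tunnel_send_datagram(const unsigned char *dgram, size_t len)
    {
        while (len > X25_DATA_MAX) {
            x25_send(dgram, X25_DATA_MAX, 1);    /* more to follow */
            dgram += X25_DATA_MAX;
            len   -= X25_DATA_MAX;
        }
        x25_send(dgram, len, 0);                 /* final piece    */
    }

    int main(void)
    {
        unsigned char datagram[300];             /* dummy datagram */
        memset(datagram, 0, sizeof datagram);
        tunnel_send_datagram(datagram, sizeof datagram);
        return 0;
    }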

For interactive terminal  service,  the  protocols  used  on  the
Catenet  and  the  UK  sides  of  the  gateway differ at both the
transport level and the  user  level.   Thus,  the  ARPANET  uses
Telnet > TCP while  XXX > X25 (or TS29 > NITS > X25) will be used
in the  UK.     In  addition,  SRCNET  uses  a  private  terminal
protocol called ITP [15], in the hierarchy:  ITP > NITS(subset) >
X25.





                            [Page 7]


UK/US Service at UCL                                      IEN-185

Therefore, UCL must provide a "terminal gateway", i.e., a gateway
which  operates  at  the  terminal protocol level.  This facility
will terminate each of the terminal protocols,  transforming  one
protocol  into  another.   We  refer  to this protocol conversion
gateway as the "Terminal Protocol Converter".



2.3 Cost Minimisation

The encapsulation of IP datagrams over  the  VAN,  IPSS  and  PSS
public data networks raises some difficult problems [23].  First,
it is  essential  to  minimise  usage  cost  on  these  networks,
particularly  on IPSS.  Since IPSS calls accumulate both connect-
time and packet charges, cost can be reduced by multiplexing  all
the  encapsulated  traffic  over a single virtual call; this call
should be open only when the path is in use.

It is not clear whether it will be worthwhile  to  pack  multiple
internet  datagrams  into  a  single  X25  packet.   PSS and IPSS
charges are based on a data unit of 64 bytes; at most  two  small
TCP packets would fit into a single unit, so the average lost due
to internal fragmentation is roughly equivalent to one small  TCP
packet.  Hence, packing datagrams will save less than a factor of
2 in cost.
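
The arithmetic can be made concrete with a small example in "C",
assuming the 64-octet charging unit and some illustrative (not
measured) small-packet sizes:

    #include <stdio.h>

    #define UNIT 64   /* PSS/IPSS charging unit, in octets */

    /* Charging units for one X25 data transfer of "len" octets.   */
    static unsigned units(unsigned len)
    {
        return (len + UNIT - 1) / UNIT;
    }

    int main(void)
    {
        /* Illustrative small-packet sizes, in octets.             */
        unsigned a = 30, b = 30, c = 45;

        printf("sent separately: %u units\n",
               units(a) + units(b) + units(c));   /* 3 units       */
        printf("packed together: %u units\n",
               units(a + b + c));                 /* 2 units       */
        return 0;
    }

In this (invented) case packing saves a factor of 1.5, consistent
with the bound of 2 given above.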

As an end-to-end protocol, TCP also raises considerable
difficulties in controlling cost inflation due to unwise
retransmissions [24].



2.4 Access Control

The use of an IP tunnel through public data  networks  raises  an
urgent  access  control  problem  [23],  again  because  of usage
charges.

At the UCL end of the IP tunnel, the Terminal Protocol  Converter
will  require  UK  users  to log in before opening a virtual call
over PSS/IPSS.  This login will provide positive  identification
to check authorisation and record usage.

However, the "BBN VAN Gateway", at the US end of the  tunnel,  is
planned  as  a pure internet datagram gateway [24].  This implies
that it will be able to apply access control only on the basis of
the source and destination internet addresses and the VAN address
(of UCL). For example, it will contain  a  list  of  "authorised"
Catenet  hosts  which  are allowed to send packets to UCL through
the VAN.  Clearly, such a low-level mechanism cannot limit access
to specific users or record user costs.
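
The sort of check such a datagram-level mechanism can make is
sketched below in "C"; the address list is invented, and nothing at
this level can distinguish one user from another on the same host.

    #include <stdio.h>

    /* Invented list of authorised Catenet source addresses; the
       real gateway would hold an administered table.              */
    static const unsigned long authorised[] = {
        0x0A000005UL,
        0x0A00002AUL,
    };

    /* Datagram-level check: forward the packet only if its
       internet source address appears on the authorised list.     */
    int may_forward(unsigned long src_addr)
    {
        size_t i;
        for (i = 0; i < sizeof(authorised) / sizeof(authorised[0]); i++)
            if (authorised[i] == src_addr)
                return 1;
        return 0;
    }

    int main(void)
    {
        printf("%d\n", may_forward(0x0A000005UL));   /* 1: allowed */
        printf("%d\n", may_forward(0x0B000001UL));   /* 0: refused */
        return 0;
    }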




                            [Page 8]


UK/US Service at UCL                                      IEN-185

3. SERVICE FACILITIES

This section will describe the network interconnection facilities
under development at UCL.  These facilities will handle two major
types of traffic -- interactive terminals and  file  transfer  --
which pose different problems.



3.1 Terminal Service

Figure 1 shows schematically the terminal service  operation  for
those users required to use IPSS.

The  Terminal  Protocol  Converter  must  implement   a   command
language,  reached  by  an  appropriate  escape  sequence.   This
language will include a "login" command and a "connect"  command.
The "connect" command will specify the address in the destination
domain.  The Protocol Converter must enforce user login,  because
the  use  of  PSS  or  IPSS will result in the expenditure of UCL
funds.  This will solve UCL's access control problem for terminal
service, and allow the proper recording of usage.
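
The command names and syntax have not been finalised; the "C" sketch
below merely illustrates the intended order of enforcement, namely
that a chargeable "connect" is refused until a "login" has
succeeded.

    #include <stdio.h>

    static int logged_in = 0;    /* set once the user is identified */

    /* Hypothetical command decoder: "login <user>" must succeed
       before "connect <address>" will open a chargeable call.     */
    void decode(const char *line)
    {
        char arg[64];

        if (sscanf(line, "login %63s", arg) == 1) {
            /* Real code would check a password and record usage.  */
            logged_in = 1;
            printf("logged in as %s\n", arg);
        } else if (sscanf(line, "connect %63s", arg) == 1) {
            if (!logged_in)
                printf("please log in first\n");
            else
                printf("opening call to %s\n", arg);
        } else {
            printf("unknown command\n");
        }
    }

    int main(void)
    {
        decode("connect 234219200300");   /* refused: not logged in */
        decode("login smith");
        decode("connect 234219200300");   /* now accepted           */
        return 0;
    }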

Since the terminal  protocols  on  the  two  sides  differ  quite
significantly   in   facilities,  fully-automatic  conversion  of
terminal protocols is not possible [22].   The  command  language
must allow the user to modify the conversion or to override it in
a particular case.

We can give a simple example of the need for  user  control  over
conversion.   ARPANET  hosts  typically use Telnet negotiation to
cause character echoing at the host end rather than the  terminal
end  of  the  connection.   Our Terminal Protocol Converter could
easily translate "WILL ECHO" from the Catenet automatically  into
the equivalent in ITP or XXX.  However, the use of host echo mode
seriously increases IPSS costs, and cannot  be  dictated  by  the
hosts.  The  Protocol Converter will therefore refuse the remote-
echo negotiation,  but  will  provide  a  command  (available  to
authorised users) to allow host echo.
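
A sketch of the refusal, in "C", is given below.  The Telnet codes
used (IAC = 255, DO = 253, DONT = 254, option ECHO = 1) are the
standard ones; the authorisation flag is our addition.

    #include <stdio.h>

    #define IAC       255
    #define DO        253
    #define DONT      254
    #define OPT_ECHO    1

    /* Stand-in for the routine that sends bytes back towards the
       Catenet server.                                             */
    static void send3(int a, int b, int c)
    {
        printf("reply: %d %d %d\n", a, b, c);
    }

    /* Called when IAC WILL ECHO arrives from the server.  Unless
       the user has been authorised to pay for remote echo, refuse
       it, leaving echoing at the terminal end.                    */
    void handle_will_echo(int host_echo_authorised)
    {
        if (host_echo_authorised)
            send3(IAC, DO, OPT_ECHO);      /* accept remote echo   */
        else
            send3(IAC, DONT, OPT_ECHO);    /* refuse remote echo   */
    }

    int main(void)
    {
        handle_will_echo(0);    /* normal case: refused            */
        handle_will_echo(1);    /* authorised user: accepted       */
        return 0;
    }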

Access to the UNIX system shown in Figure  1  will  be  primarily
(and  perhaps  exclusively) for UK users to compose and read mail.
Bulk movement of mail files between this UNIX system and  the  US
will be performed by NIFTP, as shown later (Figure 2).

From the DARPA Catenet viewpoint, the Terminal Protocol Converter
will  terminate  the  IP,  TCP  and  Telnet  protocol  layers; it
therefore will be addressed as an internet host.  Between the BBN
VAN  Gateway  and the UCL facility labelled "IP Tunnel", internet
datagrams will be transported over the public data networks (VAN,
IPSS  and  PSS)  by encapsulation in X25 packets. The "IP Tunnel"




                            [Page 9]


UK/US Service at UCL                                      IEN-185

will perform this encapsulation, but we do not plan to make it  a
gateway;  that  is,  it  will  not  perform  routing functions or
implement GGP.

Notice that PSS appears twice in Figure 1 -- as the link to IPSS,
transporting  encapsulated internet datagrams to the US, and also
for terminal access to UCL from UK users.  This indicates that
the data paths in this diagram are logical; in fact, there will
initially be only one physical path (line) to PSS from UCL.



3.2 File Transfer Traffic

NIFTP [10] will be used for the transfer of files and  bulk  mail
between  the  DARPA Catenet and the UK.  This requires the use of
suitable relays for file and mail transfer in the US.

UCL has therefore created an NIFTP implementation  on  a  TOPS-20
machine  (ISIE), using either NCP or TCP as its transport service
[17].  We plan to implement an automatic file transfer  relay  at
ISIE, based upon this implementation.

A mail relay is also being planned  [16],  to  interface  to  the
ARPANET mail service.

Although the same user-level protocol, NIFTP, is used throughout,
there  will  be different transport-level protocols on the UK and
the Catenet sides.  For file transfer, therefore, UCL  need  only
provide  a  gateway  operating  at  the  transport service level,
essentially independent of the user-level protocol on top.

On the UK side, the transport service will be NITS > X25.  On the
Catenet   side,   a   true  transport  protocol  with  end-to-end
addressing is needed, since the X25 call is to be  "concatenated"
with  the TCP connection.  As discussed previously, "bare" TCP is
not  capable  of  transmitting  the  required  X25  address.   We
therefore  plan  to  adopt  Bennett's proposal [18] for (a simple
subset of) NITS above TCP.  As shown in Figure 2, therefore, file
transfers  from  the  US  across  IPSS  will use the (incredible)
protocol hierarchy:

    NIFTP > NITS > TCP > IP > X25

Figure 2, which shows schematically the file transfer  paths  seen
by users who are required to use IPSS, is topologically very
similar to Figure 1.  Instead of the Terminal Protocol Converter,
Figure 2 has a Transport Service Gateway.

The Transport Service Gateway does create a  new  access  control
and  accounting  problem.   The  NIFTP initiating the call should




                            [Page 10]


UK/US Service at UCL                                      IEN-185

"log in" at the Transport  Service  Gateway,  but  NIFTP  has  no
mechanism  for such a login.  We plan to use the SRCNET solution
for this problem [20] -- the necessary login information will  be
buried at the appropriate point in the NITS address string, to be
interpreted by the Transport Service Gateway.
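
Purely to illustrate the idea (the real field layout will be the one
defined for SRCNET [20], not the ad hoc format assumed here), the
gateway might strip the login information out of the called address
string before forwarding the call:

    #include <stdio.h>

    /* Assumed illustrative format: "user:password.remaining-address". */
    int split_login(const char *called, char *user, char *pass,
                    char *rest)
    {
        return sscanf(called, "%31[^:]:%31[^.].%127s",
                      user, pass, rest) == 3;
    }

    int main(void)
    {
        char user[32], pass[32], rest[128];

        if (split_login("smith:secret.HOST1.FTP", user, pass, rest))
            printf("user=%s pass=%s  forward call to %s\n",
                   user, pass, rest);
        return 0;
    }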



3.3 MOD Services

Finally, Figure 3 shows the service facility as seen by  the  MOD
users.   The two datagram gateways shown in this diagram are both
in service, maintained and controlled by BBN.

Most of the paths shown in this diagram are part of  the  DARPA
Catenet.   The  "IP  tunnel" is used here to provide an alternate
route to PPSN.

In these diagrams, we have  generally  omitted  PSTN  access  for
terminals.   However,  UCL will have a TAC to provide such access
for the MOD.  In addition, MOD users are expected to  access  UCL
via a PSS PAD using X25.

The UNIX system for mail services will be a PDP-11/34  initially,
but might be upgraded in the future to a PDP-11/44.  This machine
will also be used to monitor and control the  service  functions,
to  accumulate  accounting  data,  and to maintain the login data
base.

The function labelled "UCL Datagram Gateway" in Figure 3 is a
PDP-11/35 running BBN's gateway code.   All  the  other  UCL
interconnection functions will  be  implemented  within  LSI/11's
running under MOS, using code written mostly in "C".






















                            [Page 11]


UK/US Service at UCL                                      IEN-185

     FIGURE 1.   VIEW OF UK/US SERVICE FOR SRC TERMINAL USER



/////////
//     //             ///////         ///////         ///////
//   C //    _____    //   //         //   //         //   //
// D a //   |     |   //   //   ___   // I //   ___   //   //
// A t //   | BBN |   // V //  |   |  // P //  |   |  // P //
// R e //---| VAN |---// A //--| G |--// S //--| G |--// S //---
// P n //   | Gwy |   // N //  |___|  // S //  |___|  // S //   |
// A e //   |_____|   //   //         //   //         //   //   |
//   t //             ///////         ///////         //   //   |
//     //                                             ///////   |
/////////                                                       |
                                                                |
                                                                |
    Telnet > TCP > IP > X25                                     |
    ____________________________________________________________|
   |
   |                                                            ///////
 ..|...........................................                 //   //
 . |                                          .                 //   //
 . |              U    C    L                 .    XXX > X25    // P //
 . |                                          .    ____________ // S //
 . |                     ________________     .   |             // S //
 . |   ______           |                |    .   |             //   //
 . |  |      | Telnet > |                |--------              //   //
 . |  |  IP  | TCP > IP |    Terminal    |    .                 ///////
 .  --|      |----------|    Protocol    |    .
 .    |Tunnel|          |    Converter   |    .
 .    |______|          |                |--------              ///////
 .                      |                |    .   | ITP > X25   //   //
 .                      |________________|    .   |    and      // S //
 .                          | Front |         .   | XXX > X25   // R //
 .                          |  End  |         .   |____________ // C //
 .                          |_______|         .                 // N //
 .                              |             .                 // E //
 .                              |             .                 // T //
 .                              |             .                 //   //
 .      (terminal access for    |             .                 ///////
 .          msg service)        |             .
 .                     ====================   .
 .                    ||                  ||  .
 .                    ||                  ||  .
 .                    ||   U  N  I  X     ||  .
 .                    ||                  ||  .
 .                    ||                  ||  .
 .                     ====================   .
 ..............................................





                            [Page 12]


UK/US Service at UCL                                      IEN-185

  FIGURE 2.  VIEW OF US/UK NIFTP SERVICE FOR SRC USERS



 To: PSS-IPSS-VAN-DARPA Catenet  (see Fig. 1)
   ^
   |
   |
   |   NITS > TCP > IP > X25
   |                                                            ///////
   |                                                            //   //
   |                                                            //   //
 ..|.............................................               // P //
 . |                                            .    __________ // S //
 . |                                            .   |           // S //
 . |    ______            ______________        .   |           //   //
 . |   |      | NITS >   |              |NITS > X25 |           //   //
 . |   |  IP  | TCP > IP |   Transport  |-----------            ///////
 .  ---|      |----------|   Service    |       .
 .     |Tunnel|          |   Gateway    |-----------
 .     |______|          |              |NITS > X25 |           ///////
 .                       |______________|       .   |           //   //
 .                              |               .   |           // S //
 .                              |               .   |           // R //
 .                              |               .   |__________ // C //
 .                              |               .               // N //
 .                    (local    |               .               // E //
 .                     transport|               .               // T //
 .                     service) |               .               //   //
 .                              |               .               ///////
 .                              |               .
 .                     =========|==========     .
 .                    ||   U  N |I  X     ||    .
 .                    ||        |         ||    .
 .                    ||   _____|______   ||    .
 .                    ||  |   NIFTP    |  ||    .
 .                    ||  |   / mailer |  ||    .
 .                    ||  |____________|  ||    .
 .                    ||                  ||    .
 .                     ====================     .
 ................................................














                            [Page 13]


UK/US Service at UCL                                      IEN-185

  FIGURE 3.  VIEW OF US/UK CONNECTIONS FOR MOD

/////////                           ///////
// A C //          ______           // S //
// R a //         |      |          // A //
// P t //         | Data-|          // T //
// A e //---------| gram |----------// N //------
//   n //         |  Gwy |          // E //      |
//   e //         |______|          // T //      |
//   t //                           ///////      |
/////////                                        |
             TCP > IP                            |
 ________________________________________________|             ///////
|                                                              //   //
| ........................................................     // P //
| .                                                      .     // P //
| .       ---------------------------------------------------- // S //
| .      |           IP                                  .     // N //
| .      |                                    _________  .     //   //
| .   _______   IP                           | "IP     | .     ///////
| .  | UCL   |-------------------------------|  Tunnel"| .  IP ___|___
| .  | Data- |  TCP > IP                     |_________| .    |  MOD  |
-----| gram  |--------------------------             |   .    |  VAN  |
  .  | Gate- |                     _____|_____       |   .    |  Gwy  |
  .  |  way  | TCP > IP           |           |      |   .    |_______|
  .  |_______|------> (IMP        | Transport |      |   .    IP >|X25
  .      |            +TAC)       | Service   |      |   .        |
  .      |                        | Gateway   |      |   .     ///////
  .      |     _______________    |___________|      |________ //   //
  .      |    |               |         |             IP > X25 //   //
  .      |    |     UCL       |         |                .     // P //
  .       ----|   Terminal    |------------------------------- // S //
  .Telnet >   |   Protocol    |         |           XXX > X25  // S //
  .  TCP > IP |   Converter   |         |                .     //   //
  .           |_______________|         |                .     //   //
  .                | F.E.|              |                .     ///////
  .                |_____|              |                .
  .                   |___________      | (file          .
  .                               |     |  transfer)     .
  .         (terminal access for  |     |                .
  .             msg service)   =========|==========      .
  .                           ||   U  N |I  X     ||     .
  .                           ||        |         ||     .
  .                           ||   _____|______   ||     .
  .                           ||  |   NIFTP    |  ||     .
  .                           ||  |   / mailer |  ||     .
  .  U  C  L                  ||  |____________|  ||     .
  .                            ====================      .
  ........................................................






                            [Page 14]


UK/US Service at UCL                                      IEN-185

4. TERMINAL PROTOCOL CONVERTER

In order to summarize the conversions  to  be  performed  by  the
Protocol  Converter,  we  introduce  the  symbol  "<==>"  to mean
"conversion to/from".  For example,

        TCP < Telnet  <==>  ITP > X25

indicates that the terminal protocol Telnet operating "over"  the
transport  protocol  TCP  is  to  be  converted into the terminal
protocol ITP operating over the transport service X25.

The two primary conversions to be performed are:

   (1)  TCP < Telnet  <==>  ITP > X25

   (2)  TCP < Telnet  <==>  XXX > X25

As mentioned earlier, we want to incorporate  access  to  a  UNIX
system as a server.  It is convenient to consider the UNIX server
as a new protocol, and add the pseudo-conversions:

   (3)  TCP < Telnet  <==>  UNIX

   (4)  UNIX  <==>  XXX > X25

   (5)  UNIX  <==>  ITP > X25

Finally, we will add a terminal user process which will allow  us
to  access  each  of the other four terminal protocol modules for
testing.  We call this terminal user process "TTY"  and  add  the
pseudo-transformations:

   (6)  TCP < Telnet  <==>  TTY

   (7)  TTY  <==>  ITP > X25

   (8)  TTY  <==>  XXX > X25

   (9)  TTY  <==>  UNIX

You will note that  we  have  listed  all  but  one  of  the  ten
different   conversions  possible  with  the  five  protocols  or
pseudo-protocols.  That omission is:

  (10)  X25 < XXX  <==>  ITP > X25

Someone will probably find a use for this conversion, although it
would not be related to US-UK service.






                            [Page 15]


UK/US Service at UCL                                      IEN-185

     FIGURE 4.  TERMINAL PROTOCOL CONVERTER


      __________________________________________________________
     |                                                          |
     |                                        _______________   |
     |                                       |       |       |  |
     |   ____________________                |       |       |  |
     |  |     |       |      |               |   I   |       |  |
     |  |     |       |  T   |<=============>|   T   |       |  |
     |  |     |   T   |  E   |               |   P   |   X   |  |
     |  |  I  |       |  L   |        ======>|       |       |  |
--------|     |   C   |  N   |       |       |_______|   2   |--------
     |  |  P  |       |  E   |       |       |       |       |  |
     |  |     |   P   |  T   |<=============>|       |   5   |  |
     |  |     |       |      |<===   |       |   X   |       |  |
     |  |_____|_______|______|    |  |       |   X   |       |  |
     |                            |  |   ===>|   X   |       |  |
     |                            |  |  |    |       |       |  |
     |                            |  |  |    |_______|_______|  |
     |                         ___|__|__|__                     |
     |                        |            |                    |
     |________________________|  UNIX      |____________________|
                              |  Terminal  |
                              | Front End  |
                              |____________|
                                     |
                                     |



The Terminal Protocol  Converter  is  implemented  as  a  set  of
"Protocol Processes", one for each terminal-level protocol, and a
common User Command Decoder Process.  Each Protocol Process  (PP)
executes  an  appropriate  terminal protocol module (Telnet, ITP,
X28, X29, etc.) which in turn calls on the appropriate  transport
process  (TCP,  X25,  UNIX, local-TTY).  The User Command Decoder
controls the establishment and termination of user sessions,  and
the  consequent  accounting  and access control.  It also decodes
user commands.

A session generally requires the collaboration of two PP's -- one
to  handle  the "terminal" (user) side of the conversation, while
the other handles the "host" (server) side; see Figure  5.   Each
PP is able to act in either role.
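
The pairing can be pictured in "C" as follows; the structure and
field names are illustrative only, not those of the implementation.

    #include <stdio.h>

    /* One Protocol Process per terminal-level protocol; each can
       act in either the terminal role or the host role.           */
    struct pp {
        const char *name;                   /* e.g. "Telnet", "ITP" */
        void (*to_net)(const char *data);   /* write to its network */
    };

    /* A session binds one terminal-side PP to one host-side PP.   */
    struct session {
        struct pp *pp_term;    /* where the user's terminal enters  */
        struct pp *pp_host;    /* chosen by the UCD at connect time */
        int        logged_in;
    };

    static void telnet_out(const char *d) { printf("Telnet <- %s\n", d); }
    static void itp_out(const char *d)    { printf("ITP    <- %s\n", d); }

    static struct pp pp_telnet = { "Telnet", telnet_out };
    static struct pp pp_itp    = { "ITP",    itp_out    };

    int main(void)
    {
        /* e.g. an SRCNET (ITP) user talking to a Catenet host.    */
        struct session s = { &pp_itp, &pp_telnet, 1 };

        s.pp_host->to_net("user keystrokes, converted");  /* term->host */
        s.pp_term->to_net("host output, converted");      /* host->term */
        return 0;
    }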

A session is initiated when a user opens a "call" or "connection"
to  a terminal PP, which we will call "PP.term".  PP.term makes a
logical connection to the UCD, which starts  a  dialog  with  the
user.   The  user  will  log  in  and  request  connection  to  a
particular remote host using some protocol.  The  UCD  will  then




                            [Page 16]


UK/US Service at UCL                                      IEN-185

request the appropriate host PP, "PP.host", to open the  call  to
the selected server host.

This mechanism is unsymmetrical with respect to the role  of  the
two  PP's.   The User Command Decoder monitors the input from the
terminal side, to detect and parse Protocol  Converter  commands.
User  messages  generated  as a result, and host output messages,
are sent directly to PP.term.



FIGURE 5.  TYPICAL PROTOCOL CONVERTER SESSION


 ____________________________________________________
|                                                    |
|               User Command Decoder                 |
|                    ____________                    |
|         +-------> |            |  host input       |
|         |         |    UCD     |------>            |
|         |    <----|            |      |            |
|         |    |    |____________|      |            |
|         |    |                        |            |
|         |    |                        |            |
|         |    V    <<Protocol          V            |
|       __________     Processes>>  __________       |
|      |          |                |          |      |
|      |  PP.term | <------------- |  PP.host |      |
|      |          |     host       |          |      |
|      |..........|      output    |..........|      |
|           |                           |            |
|           V                           V            |
|       Interface                   Interface        |
|      to Transport                to Transport      |
|      Service                     Service           |
|                                                    |
|____________________________________________________|



This scheme using two PP's for each  session  is  an  attempt  to
avoid   the  combinatorics  implied  by  five  different terminal
protocol handlers.  It does have  a  cost,  however  --  we  must
design  an  internal  protocol for communication between the PP's
and with the UCD.  The conversion function is  effectively  split
into two parts, in each of the PP's.

The internal protocol includes  a  flow  control  mechanism.   No
session  can  obtain  more than its maximum share of the buffers,
and if a PP stops writing data, back-pressure will soon stop  the
corresponding  PP  from  reading  new  data for the same session.




                            [Page 17]


UK/US Service at UCL                                      IEN-185

Each session has a guaranteed minimum number of  buffers,  so  it
can  keep  going,  although  perhaps  slowly,  when the system is
congested.  In the absence of congestion, the number  of  buffers
in  use by each session will fluctuate between this minimum and a
maximum, depending upon the relative rates of input and output of
data.
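
A minimal sketch of such an allocation rule, in "C", is given below;
the particular limits are invented.

    #include <stdio.h>

    #define MIN_PER_SESSION  2   /* guaranteed to every session    */
    #define MAX_PER_SESSION  8   /* ceiling for any one session    */

    static int pool_free = 20;   /* buffers beyond the reserved minima */

    struct session {
        int held;                /* buffers held by this session   */
    };

    /* Give the session one more buffer if the rules allow.  Below
       its minimum it always succeeds; above that, only while the
       shared pool has spares and its own maximum is not reached.  */
    int get_buffer(struct session *s)
    {
        if (s->held < MIN_PER_SESSION) {
            s->held++;
            return 1;
        }
        if (s->held < MAX_PER_SESSION && pool_free > 0) {
            pool_free--;
            s->held++;
            return 1;
        }
        return 0;   /* back-pressure: stop reading for this session */
    }

    void put_buffer(struct session *s)
    {
        if (s->held > MIN_PER_SESSION)
            pool_free++;         /* return a shared buffer         */
        s->held--;
    }

    int main(void)
    {
        struct session s = { 0 };
        while (get_buffer(&s))
            ;
        printf("session holds %d buffers, %d left in the pool\n",
               s.held, pool_free);
        return 0;
    }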

As mentioned earlier, the Terminal Protocol  Converter  is  being
implemented  in "C" and executed under MOS on LSI/11's.  As shown
in Figure 4, the Terminal Protocol Converter  must  include  code
for  IP,  TCP,  Telnet,  ITP,  X28, X29, X25, as well as the UNIX
driver, a command interpreter, and  access  control,  monitoring,
and   accounting  facilities.   This  code  greatly  exceeds  the
standard 16-bit address space of an LSI/11.  At a later stage, we
may  use a virtual-memory MOS system on a PDP-11/23; at present,
however, a much more straightforward approach  is  being  taken.
The  code  is  being  split  across three LSI/11s.  The Transport
Service Gateway and the IP Tunnel will also be contained  in  the
same LSI/11s, since they share many modules.

The  Terminal  Protocol  Converter  LSI/11s  must  be   able   to
intercommunicate and to communicate with the UNIX system.  In the
early development of the service facility,  we  have  been  using
1822  interfaces  for  this  purpose.   However,  as  soon as the
drivers are fully debugged, we will start using a  local  network
--  specifically,  a  Cambridge  Ring  -- for these  inter-LSI/11
links within the Terminal  Protocol  Converter.   The  Ring  will
similarly  implement  some  of the other intra-UCL paths shown in
Figures 1-3.

To maximise our flexibility in assigning  modules  to  particular
LSI/11s,  we  defined  a  standard  form  for  all transport (and
network) services, including TCP and X25 [25].  This standardized
interface, shown as a dotted line in Figure 5, is known at UCL as
the  "MOS  Clean  and  Simple"  interface.   We  then  built   an
"Interprocessor Clean and Simple" (IPCS), which may be considered
to be the software  equivalent  to  an  "extender  board".   IPCS
allows  the  two  sides  of the interface to operate in different
LSI/11s as if they were in the same LSI/11.  IPCS itself has been
implemented  for  LSI/11's  using  either  an 1822 connection or a
low-level transport protocol on the Cambridge Ring.
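
The interface itself is specified in [25]; the "C" sketch below, with
invented names, only illustrates the general idea of a standard form
for transport services, so that a calling module need not know
whether TCP, X25, or the interprocessor (IPCS) version lies
underneath.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative standard form: every transport (or network)
       service provider exports the same set of operations.        */
    struct transport_service {
        const char *name;
        int  (*open)(const char *address);      /* returns a call id */
        int  (*send)(int call, const char *data, int len);
        int  (*recv)(int call, char *buf, int maxlen);
        void (*close)(int call);
    };

    /* A trivial loopback provider, used here only to make the
       sketch self-contained.                                       */
    static char echo_buf[256];
    static int  echo_len;

    static int  lb_open(const char *a)  { (void)a; return 1; }
    static void lb_close(int c)         { (void)c; }
    static int  lb_send(int c, const char *d, int n)
    {
        (void)c;
        echo_len = n < 256 ? n : 256;
        memcpy(echo_buf, d, echo_len);
        return echo_len;
    }
    static int  lb_recv(int c, char *b, int m)
    {
        (void)c;
        if (m > echo_len)
            m = echo_len;
        memcpy(b, echo_buf, m);
        return m;
    }

    static struct transport_service loopback =
        { "loopback", lb_open, lb_send, lb_recv, lb_close };

    int main(void)
    {
        struct transport_service *ts = &loopback;  /* could be TCP or X25 */
        char buf[16];
        int  call, n;

        call = ts->open("example-address");
        ts->send(call, "hello", 5);
        n = ts->recv(call, buf, sizeof buf);
        printf("%.*s\n", n, buf);
        ts->close(call);
        return 0;
    }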

In  assigning  functions  to  LSI/11's,  we  have  a  number   of
constraints.     Many    of    these   are   purely   programming
considerations, such as  module  pairs  whose  common  interface
requires both modules to be in the same processor.  However, some
constraints are imposed by the network protocols themselves.








                            [Page 18]


UK/US Service at UCL                                      IEN-185

In particular, the use of datagrams by IP is  well  suited  to  a
distributed  organization,  but the problem appears harder with a
virtual-call protocol  like  X25.   For  example,  the  SRI  Port
Expander  multiplexes  logical  internet hosts onto a single host
port, permitting the operation of  TCP  in  multiple  processors.
However, there is no corresponding facility for X25; as a result,
the single PSS line must be attached to a particular processor,
which then becomes THE "PSS access machine".




5. CONCLUSIONS

This  document  has  described  the   second-generation   service
facility  under  development at UCL, to connect the DARPA Catenet
with X25-based networks in the UK.  This facility is  implemented
as a complex set of protocol handling modules, operating on a set
of inter-communicating LSI/11's.  A local network, the  Cambridge
Ring, will be used for these interconnections.  The operation and
monitoring of this system will be performed by a program  running
under UNIX on a PDP-11/34.

Linking the DARPA Catenet with public data networks  has  created
important problems of access control, in addition to the familiar
ones of addressing and routing. At the UCL  end,  we  will  force
user login in order to apply access controls and distribute costs
on a per-user basis.

The complexity of the total system leads to  difficult  problems
of reliability.  Further research will be necessary in this area.

In the future, we intend to  consider  a  generalisation  of  the
present rather ad hoc distribution of functions in LSI/11's.  The
protocol  conversions  and  network  interconnections  could   be
distributed  into  a  pool  of  equivalent  microprocessors, each
performing a particular network or conversion function [21].  The
Cambridge  Ring  would  be used as a common communication bus for
the processors.  The intent would be to improve  reliability  and
to more easily meet changing protocol requirements.















                            [Page 19]


UK/US Service at UCL                                      IEN-185

6. REFERENCES


 [1]    P. T. Kirstein, "Transatlantic Collaborative  Computing".
        Indra Note 1027, UCL, London, December 1980.

 [2]    P. T. Kirstein, "The Transition Requirements during  1981
        for  UK/US  Services".   Indra  Note  1037,  UCL, London,
        February 1981.

 [3]    J. W. Burren, et al., "Design for an  SRC/NERC  Computer
        Network",  RL  77-0371A, Rutherford Laboratory, Abingdon,
        1977.

 [4]    C. F. Broomfield, "Packet Switching  -  The  Experimental
        Packet  Switched  Service".  Comp. Commun. Rev., 2, 7-11,
        1975.

 [5]    "ARPANET   Protocols   Handbook".     NIC    7104,    SRI
        International, Menlo Park, 1978.

 [6]    J.  B.  Postel,    "DOD  Standard  Transmission   Control
        Protocol",  RFC  761, USC Information Sciences Institute,
        Marina del Rey, 1979.

 [7]    J. B. Postel,   "DOD  Standard  Internet  Protocol",  RFC
        760,  USC Information Sciences Institute, Marina del Rey,
        1979.

 [8]    CCITT, "Recommendations X3, X25, X28, and X29  on  Packet
        Switched  Data  Services".   Int.  Telecom. Union, Geneva
        1978.

 [9]    P. F. Linington, Ed., "A  Network  Independent  Transport
        Service".  SG3/CP(80)2, Post Office PSS User Forum, Study
        Group 3, The  Computer  Laboratory,  Cambridge,  England,
        February 1980.

 [10]   (anon.), "A Network Independent File Transfer  Protocol".
        FTP-B(80),  Data  Communication  Protocols Unit, National
        Physical Laboratory, Teddington, February 1981.

 [11]   I. M. Jacobs, et al.,  "General   Purpose   Satellite
        Network".  Proc. IEEE, 66, 11, 1448-1467, 1978.

 [12]   P. T. F. Kelly,  "Non-Voice  Services  -  Future  Plans".
        Proc.  Conf.  Business Telecommunications, Online, 65-82,
        1980.

 [13]   P.  H.  Masterman,  "The  RSRE  Pilot   Packet   Switched
        Network".    Proc.   Intern.   Conf.  on  Data  Networks:




                            [Page 20]


UK/US Service at UCL                                      IEN-185

        Development and Use, London, pp 277-292, 1980.

 [14]   "Character Terminal Protocols over PSS".  PSS User Forum,
        Study Group 3, British Telecom, London, 1979.

 [15]   P. M. Girard, "Protocols in the SRC/NERC Network".  Issue
        No. 5, Rutherford Laboratory, September 1980.

 [16]   C. J. Bennett, "A Simple NIFTP-Based Mail System".  Indra
        Note 1025, UCL, London, January 1981.

 [17]   C.   J.    Bennett,    "TOPS20/TENEX    NIFTP    Overview
        Documentation".   Indra  Note  849, UCL, London, December,
        1979.

 [18]   C. J. Bennett, "Realization of the Yellow Book Transport
        Service Above TCP".  IEN-154, UCL, London, July 1980.

 [19]   C.  J.  Bennett,  "The  Yellow  Book  Transport  Service:
        Principles  and  Status".   IEN-155,  UCL, London, August
        1980.

 [20]   A. S. Dunn, "A User  Authorisation  Scheme  for  SRCNET".
        Rutherford Laboratory, Abingdon, December 1981.

 [21]   P. L. Higginson, "Plans for the Service Project".   Indra
        Note 1007, UCL, London, November 1980.

 [22]   P. L. Higginson, "Mapping Telnet  to  ITP".   Indra  Note
        963, UCL, London, July 1980.

 [23]   P. T.  Kirstein,  "The  Facilities  Needed  in  U.S.  VAN
        Gateways  to  ARPANET  at  Different Levels".  Indra Note
        957, UCL, London, July 1980.

 [24]   J. H. Haverty, "VAN Gateway: Some Routing and Performance
        Issues".   IEN-181,  Bolt,  Beranek,  and  Newman,  Inc.,
        Cambridge, Mass., May 1981.

 [25]   R. T. Braden and  P.  L.  Higginson,  "Clean  and  Simple
        Interface  under  MOS".   Indra  Note  1054, UCL, London,
        February 1981.













                            [Page 21]
