From: goldstein@isdnip.lkg.dec.com
Subject: Re: When Will General Computing Conquer Telecom Switching?
Organization: Digital Equipment Corp.
Date: Wed, 28 Jul 1993 04:56:47 GMT
In article <telecom13.516.14@eecs.nwu.edu> nagle@netcom.com (John
Nagle) writes:
> The interesting thing is that COs still mostly follow the
> model of a big dumb crosspoint run as a computer peripheral. One
> would think by now that switches would be much more distributed, with
> little CPUs all through the switch fabric. But given the amount of
> trouble people still have designing distributed systems, it's not
> clear that using lots of little CPUs would improve reliability, and
> might well make it worse.
Computer folks and telephone folks are a long way apart, that's fer
sure. I do however have a passing familiarity with both
disciplines.
Distributing the logic in a telephone switch is a tricky proposition.
In practice, you need ONE central system-wide state machine or the
whole system won't work. This CPU has to keep track of all of the
system's objects (to use a modern name). These are lines, trunks,
numbers (.ne. lines), bandwidth, etc.
Distributing all of the logic causes major grief. It is not a problem
for POTS services: A stepper is fully distributed, and an electronic
stepper-like switch is possible (and available ...) But if you want
to be competitive and have real Centrex/PBX features, you need multi-
line key telephone sets implemented in software. These have an
arbitrary number of line buttons, each of which has a visual
indication of status, accurate to within a second or less.
This is not terribly hard to implement when the keyset control process
can poll the status of each monitored line or number. But when a
switch is fully distributed, where do you look? You get into a messy
information exchange problem that requires lots of bandwidth and tends
to take oodles more programming, yet software is already most of the
cost.
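To make the point concrete, here is a toy sketch (purely
illustrative, with made-up names; it resembles no actual switch
generic) of why the centralized model is comfortable: with one
system-wide state machine, driving a keyset's lamps is just a table
lookup.

    /* Toy sketch only -- hypothetical names, not any real generic.
     * With one central state machine, every line's state lives in
     * one table, so refreshing a keyset's lamps is a simple lookup. */
    #include <stdio.h>

    #define MAX_LINES 100000

    enum line_state { IDLE, RINGING, BUSY, HELD };

    static enum line_state line_table[MAX_LINES]; /* the ONE copy of truth */

    /* Refresh the lamps on one multi-line keyset: index the table. */
    void refresh_keyset(const int *monitored, int nbuttons)
    {
        int i;
        for (i = 0; i < nbuttons; i++) {
            enum line_state s = line_table[monitored[i]];
            printf("button %d: lamp %s\n", i,
                   s == IDLE ? "dark" : s == HELD ? "winking" : "lit");
        }
    }

    int main(void)
    {
        int buttons[3] = { 100, 250, 3071 };  /* directory number indexes */
        line_table[250] = BUSY;
        refresh_keyset(buttons, 3);
        return 0;
    }

Fully distribute the switch and there is no line_table to index; every
lamp update turns into a query (or a subscription) to whichever
processor happens to own that line, which is exactly the messy
information exchange described above.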
That fully-distributed switching is a losing proposition (in the US
market, where keysets count, but not necessarily Europe where the
culture doesn't expect them) can be seen from industry experience.
Northern's SL-1 and DMS switches always had single state CPUs. The
DMS-100 has a bunch of little microprocessors chugging away at local
real-time tasks, but one "big" CPU (often a 68020 but lately it might
be a bigger Moto chip) runs the real generic. I think the source
language is Protel, a specialized structured language. (The SL-1 was
originally written in the SL-1 language.) These switches are very
successful.
The AT&T 5ESS was first announced with an almost-fully distributed
processing model, but was withdrawn for three years or so and finally
shipped with a lot more power than expected (read: state machine) in
the central module. It does have a lot of distributed power, but at
least there's one core. I think they've given up on trying to get rid
of it, as per some early 5ESS literature which practically apologized
for not being fully distributed.
The Mitel SX-2000 has distributed real-time Motos surrounding a
centralized state machine. It's fairly successful. But look... The
ITT 1240 was fully distributed. It never worked in the US market and
was pulled; ITT sold out to Alcatel. The 1240 is a big hit in Europe
where there's no need for keysets. The Rolm VLCBX was fully
distributed and years late. The Wescom 580DSS divvied its CPU load
into six processors and look how far they got.
In any case, fully-distributed operation does not add reliability. A
distributed stepper had no single failure point. But with processors,
you can distribute all you want and still have a single bug in the
code bring down all instances of it. Witness the famous event,
chronicled here in the Digest, when AT&T's SS7 network crashed due to
one misplaced "break" statement.
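(For the computer folks: the fragment below is only a schematic of
that class of bug, with hypothetical names; it is not the actual AT&T
code. The point is that one stray "break" skips the state cleanup,
and since every node runs the same load, every node takes the same
wrong path.)

    /* Schematic illustration only -- NOT the actual AT&T code. */
    #include <stdio.h>

    enum node_state { NORMAL, RECOVERING };

    void handle_message(int msg_type, enum node_state *state)
    {
        switch (msg_type) {
        case 1:                 /* a neighbor announces it is back up */
            if (*state == RECOVERING) {
                /* ... requeue pending work ... */
                break;          /* BUG: bails out of the switch before
                                   the state is reset, so this node
                                   still looks like it is recovering */
            }
            *state = NORMAL;
            break;
        default:                /* ... normal call handling ... */
            break;
        }
    }

    int main(void)
    {
        enum node_state s = RECOVERING;
        handle_message(1, &s);  /* the neighbor came back ... */
        printf("state is still %d (stuck in RECOVERING)\n", (int)s);
        return 0;
    }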
Fred R. Goldstein goldstein@carafe.tay2.dec.com
Opinions are mine alone; sharing requires permission.
From: floyd@hayes.ims.alaska.edu (Floyd Davidson)
Subject: Re: When Will General Computing Conquer Telecom Switching?
Organization: University of Alaska Computer Network
Date: Thu, 29 Jul 1993 05:35:25 GMT
In article <telecom13.519.12@eecs.nwu.edu>
goldstein@isdnip.lkg.dec.com writes:
> That fully-distributed switching is a losing proposition (in the US
> market, where keysets count, but not necessarily Europe where the
> culture doesn't expect them) can be seen from industry experience.
> Northern's SL-1 and DMS switches always had single state CPUs. The
> DMS-100 has a bunch of little microprocessors chugging away at local
> real-time tasks, but one "big" CPU (often a 68020 but lately it might
> be a bigger Moto chip) runs the real generic. I think the source
> language is Protel, a specialized structured language. (The SL-1 was
> originally written in the SL-1 language.) These switches are very
> successful.
Perhaps some added detail on the DMS design will put into perspective
just how much distributed processing there is in a digital switching
system.
The DMS is a fault-tolerant, real-time, message-passing computer
system built around multiprocessor peripheral processing modules. It
just happens to be programmed to route data bits that are telephone
calls. But one could just as well be programmed to control every
nuclear power plant in the country, and while it did that it could
provide PBX services for the power company too!
The original DMS "front end" design used a board-level CPU (the
NT-40) with a Harvard architecture (8-bit instruction bus and 16-bit
data bus) and a segmented memory model similar to an 8088. The
peripheral modules used 8085 CPUs for everything: tone generators,
receivers, senders, test equipment, trunk controllers ... everything.
Then came XPMs (eXperimental Peripheral Modules), which are
68000-based units. Generally each XPM (such as a DTC, or Digital
Trunk Controller) has two units which operate in sync, one active and
the other standby, and each unit has two 68000 CPUs. Such a DTC
handles 480 trunks. Various other kinds of XPM are designed to do
everything from lines to SS7 processing. A small switch might have 20
of the older PMs and ten of the XPMs; a huge switch might have nearly
a hundred of each.
Originally the 8085 PMs had 64 KB of RAM, and the first XPMs had
something like 384 KB. Currently the XPMs have something like 2 MB of
RAM. That sounds reasonable ... but there is more to it than meets
the eye! Ten or twelve years ago it took a couple of minutes to
reload the memory in a PM (from tape, but only 64 KB), and the new
XPMs took 10-15 minutes from disk. That was barely something an
operating company could live with! The first time I heard about the
development of a large-memory version of the XPM, it was in terms of
how much trouble they were having re-designing everything to bring
the load time down to a reasonable figure. And the worst horror story
I've ever heard about a telco "cut" to something new is how NTI
thought a switch with about 70 large-memory XPMs could be "crash
loaded" all at once in an hour ... and *many* hours later the last
XPM finally came back on line. (The disk buffers were too
small ...)
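Some rough arithmetic with the ballpark figures above shows why the
load path had to be redesigned (take the numbers as illustrative, not
measured):

    64 KB in ~2 minutes    ->  roughly 0.5 KB/s effective
    384 KB at ~0.5 KB/s    ->  about 12 minutes, which matches the
                               10-15 minute figure
    2 MB at ~0.5 KB/s      ->  over an hour per XPM

Multiply that by 70 XPMs funneled through undersized disk buffers and
it is not hard to see how an "hour-long" crash load turned into an
all-night affair.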
The NT-40 front end is being phased out by NTI, and the SuperNode
front ends using 68020/30 CPUs are the standard. There are actually
two front end units running in sync with each other and comparing
notes ... and each has two CPUs. And there are two characteristics
that most computer people will find unusual. One is that *all*
programs are in memory. Programs are not executed from disk. (Some
testing tools are not permanently loaded, but any software the
operating company is expected to use is resident in memory.) The
other unusual thing is that when you log in on a terminal you are a
background process. We usually think of whatever we can see output
from on a terminal as being in the foreground, but on a switch call
processing is the foreground.
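The scheduling idea, in miniature (my own toy illustration, nothing
to do with the real DMS scheduler): call processing classes outrank
everything else, and the logged-in terminal session sits at the
bottom of the pile.

    /* Toy illustration -- not the real DMS scheduler.  Call
     * processing is the "foreground"; the craftsperson's terminal
     * is just another background task. */
    #include <stdio.h>

    enum task_class { CALL_PROCESSING, MAINTENANCE, AUDITS, TERMINAL };

    static const char *class_name[] = {
        "call processing", "maintenance", "audits", "terminal session"
    };

    /* Lower value = higher priority; run the most urgent ready class. */
    enum task_class pick_next(const int ready[4])
    {
        int c;
        for (c = CALL_PROCESSING; c <= TERMINAL; c++)
            if (ready[c])
                return (enum task_class)c;
        return TERMINAL;
    }

    int main(void)
    {
        int ready[4] = { 1, 0, 1, 1 }; /* calls waiting: the terminal waits */
        printf("run next: %s\n", class_name[pick_next(ready)]);
        return 0;
    }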
The operating system is a real-time message passing system based on
distributed processing in multiple peripheral modules. The front end
CPU does virtually none of the "switching"; it is the database
controller for the configuration tables, the state machine data for
calls, and the state machine data for devices; and it is a
communications center for the peripheral modules.
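What "database controller plus communications center" means in
practice, sketched in miniature (hypothetical structures and names,
not NTI's): the peripheral modules detect events and move the voice
bits, while the front end only keeps the per-call state records and
tells the peripherals what to do next.

    /* Miniature sketch of the division of labor -- hypothetical,
     * not NTI code.  Peripherals report events; the front end keeps
     * the call records and (in real life) sends orders back. */
    #include <stdio.h>

    enum call_state { CS_IDLE, CS_DIALING, CS_ROUTING, CS_TALKING };
    enum pm_event   { EV_OFFHOOK, EV_DIGITS, EV_ANSWER, EV_ONHOOK };

    struct call_record {            /* lives only in the front end */
        enum call_state state;
        int orig_pm, term_pm;       /* which peripheral owns each end */
    };

    struct pm_msg {                 /* message from a peripheral module */
        int pm_id;
        enum pm_event event;
    };

    /* Front-end state machine: consume one event, update the record. */
    void front_end_handle(struct call_record *cr, struct pm_msg m)
    {
        switch (m.event) {
        case EV_OFFHOOK: cr->state = CS_DIALING; cr->orig_pm = m.pm_id; break;
        case EV_DIGITS:  cr->state = CS_ROUTING; break; /* then pick term_pm */
        case EV_ANSWER:  cr->state = CS_TALKING; break;
        case EV_ONHOOK:  cr->state = CS_IDLE;    break;
        }
        printf("call state is now %d\n", (int)cr->state);
    }

    int main(void)
    {
        struct call_record cr = { CS_IDLE, -1, -1 };
        struct pm_msg offhook = { 7, EV_OFFHOOK }; /* PM 7 saw an off-hook */
        front_end_handle(&cr, offhook);
        return 0;
    }

This is also why (as noted below) the front end can be re-booted
without dropping calls already in the talking state: the bits keep
moving in the peripherals.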
The hardware is dual redundant and fault tolerant. To date I've only
seen one instance where a hardware failure caused the entire switch to
fail. That was a slow failure of the power supply for the
communications channels between the two front end units. The slow
failure caused modules powered by the unit to transmit garble in both
directions and each front end unit thought the other had lost sanity.
The machine re-booted from scratch on both sides and came back up in
three minutes. (No calls were lost, but no new ones could be set up in
those three minutes.)
It was suggested (in the article that Fred was responding to) that
modern switches are NOT distributed, which isn't really the case. The
"control" is centralized in one compute module, but the work of moving
data bits from one line or trunk to another is totally distributed.
In fact the front end computer can be re-booted without losing the
existing calls (or the AMA data records relating to them).
It was also suggested that a lack of distributed processing was a
cause for lower reliability than was true with mechanical switching
systems ... I doubt it. My bet is at least one order of magnitude in
the other direction! (I know of one example where a toll switch room
once had more than 20 technicians working the evening shift, and one
day I called there and found only a janitor ... with a number to call
for AT&T's control center in Denver, many hundreds of miles away.
That can't be an unreliable switch ... :-)
Floyd
floyd@ims.alaska.edu    A guest on the Institute of Marine Science computer
Salcha, Alaska          system at the University of Alaska at Fairbanks.
From: Floyd Davidson <floydd@chinet.chi.il.us>
Subject: Re: Unix on Switches
Organization: Chinet - Public access UNIX
Date: Sat, 7 Sep 1991 08:06:49 GMT
In article <telecom11.706.3@eecs.nwu.edu> HOEQUIST@bnr.ca (C.A.)
writes:
> Brian Crowley asks:
>> I understand that modern CO switches run a software program called a
>> generic which is based on the UNIX system. Just how different is the
>> kernel the switch runs from the kernel which is running my
>> workstation? What sort of interface does the CO technician have to
>> the switch (dumb terminal, graphics terminal, etc.)? Is it possible
>> to bring up a shell on a CO switch? What type of filesystem is
>> typically used?
> I can't say anything about switches other than DMSs; I would guess
> that AT&T binds Unix closely to its stuff (Andy? you out there?). The
Likewise, I only have experience on DMS switches.
> DMS operating system is proprietary and definitely _not_ Unix or
> Unix-based. However, peripherals do indeed run Unix, on graphics
> terminals, with all the trimmings.
On most DMS switches (though this may not be the case in the latest
installations) dumb terminals are pretty much standard. VT-100s are
about it. The peripherals that run UNIX are not used by maintenance
people for most things. I haven't used one yet; it isn't used, for
example, for trunk or line testing.
> If there is a Unix kernel out there running an average urban CO,
> though, I'll bet it's got a lot more muscle than anything sitting in a
> workstation. I doubt that a garden-variety kernel could handle the
> demands of a CO switch, particularly the multitasking and real-time
> scheduler needs.
The multitasking, yes. The real-time, not even close. Most
workstations have more CPU power than, say, an NT-40 or even a
SuperNode, but they don't have the I/O. The weird thing about
doing things on a DMS is getting used to the concept that the terminal
process is a background process. On normal computers you get used to
what you can see as the foreground, and what you can't as the
background. A switch processes calls in the foreground, and
everything else, including your terminal, is in the background.
And there is a shell on the DMS, but spawning off a separate one is a
bit tricky and cumbersome. Also, the shell is very different from
anything on UNIX. It is a little frustrating to move between the two.
The file system on a DMS is based on an IBM tape format, even on the
hard disk (the first DMS-100's didn't have hard disks). It is slow
and very inconvenient, but it does the job it was intended to and has
been consistent since the original DMS came out.
There are a few utility programs available. There is a line editor, a
sort program, a compress program, and others. But they are not
designed to be generally useful. They were designed for specific
applications, and usually do those rather nicely. It is hard to come
up with ways of doing normal computer-type operations like reformatting
text files and so on. Some of the 'shell scripts' that have been
designed for various purposes are really very imaginative. With UNIX
there are usually several ways to do something and the hard part is
figuring out which is best. On a DMS the trick is to even find a
way.
Floyd