
Having been involved in the IT industry for over two decades, I have often pondered which great technologies are now obsolete, even though they were superior to competing technologies. Their demise was usually brought about through the mass adoption of an inferior alternative which was, more often than not, less expensive.

So this article is perhaps a way of me saying a last goodbye to these technologies:

Plasma displays

When Panasonic announced in October 2013 that it was exiting the Plasma market, it was the end of an era. I am sure some manufacturers may continue to produce Plasma displays over the next few years; however, Panasonic was the leader and champion of Plasma display technology. Their focus is now on LED and OLED display technology, and OLED will eventually surpass Plasma.

Plasma displays are superior to LCD/LED displays; LCD has been playing a game of ‘catch-up’ for quite some time. The specifications of an LCD display would usually describe by how much the technology’s inherent issues had been corrected rather than how good the product actually was. I am sure we all remember early flat-panel TVs ghosting as players ran up and down the pitch during a football broadcast.

For reasons unknown to me, I found that the majority of my customers had a perception that LCD/LED TVs were better than Plasma. I would give them at least four reasons why Plasma displays are better:

  • It does pure black as a ‘colour’. No light comes off true black areas on a Plasma. An LCD/LED display always has a light source turned on behind the LCD panel, so it is impossible for it to generate pure black. This might not sound like a big deal, but there are several shades of grey leading up to black; on an LCD display, black is merely a dark shade of grey, which affects the overall picture quality.
  • The contrast ratio on Plasma displays is measured in millions; on LCDs it is measured in tens of thousands.
  • The response time on a Plasma is measured in hundredths of a millisecond; on LCDs it is measured in milliseconds.
  • The operational lifespan of an LCD/LED display is often a quarter that of a Plasma display.

Plasma had a reputation for screen burn-in, but in reality this is a problem for all displays, regardless of their technology (except OLED).

Plasma also had a reputation for being less energy efficient. This is possibly true, but only very recently. A Plasma’s energy consumption varies considerably with the content being viewed, and so it could potentially consume less energy than a similarly sized LCD/LED display: Plasma displays use more energy when displaying bright imagery and less when displaying darker imagery, whereas LCD/LED displays consume energy at a constant rate regardless of the content being displayed.
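As a back-of-the-envelope illustration of why content matters, here is a minimal sketch in Python. All of the wattage figures are assumptions chosen for illustration, not measured values for any particular model:

```python
# Toy comparison of average power draw; all wattages are illustrative
# assumptions, not measurements of any real display.
PLASMA_WATTS_BRIGHT = 350   # assumed Plasma draw on a bright scene
PLASMA_WATTS_DARK = 120     # assumed Plasma draw on a dark scene
LCD_WATTS_CONSTANT = 200    # assumed constant LCD/LED backlight draw

def plasma_average_watts(bright_fraction):
    """Average Plasma draw for a given fraction of time spent on bright scenes."""
    return (bright_fraction * PLASMA_WATTS_BRIGHT
            + (1 - bright_fraction) * PLASMA_WATTS_DARK)

for fraction in (0.2, 0.5, 0.8):
    print(f"{fraction:.0%} bright content: Plasma ~{plasma_average_watts(fraction):.0f} W "
          f"vs LCD/LED {LCD_WATTS_CONSTANT} W")
```

With mostly dark content (a film, say) the Plasma in this toy model draws less than the LCD; with bright studio content it draws considerably more.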

The rise of LED/LCD displays has resulted in the development of thinner and lighter displays, which is one significant factor that has brought about the demise of the Plasma display.

You can still purchase a Plasma monitor or TV, but when they are gone, they are gone forever. I suspect that Panasonic will stock their flagship professional monitors for select customers such as broadcasters, but as they mothballed their last Plasma factory in March 2014, this really is the end of an era.

Firewire

Firewire is an ultra-fast data port used mainly for connecting storage devices to computers. It was streets ahead of USB in terms of transfer speeds (Firewire 400 offered 400Mb/s when USB 1.1 managed just 12Mb/s), and on some ports it was able to deliver more power than USB.

USB has pretty much wiped out the need for Firewire, despite the fact that USB spent years playing catch-up with the transfer speeds possible over Firewire. Recently USB 3.0, at a nominal 5Gb/s, has started appearing on PCs and laptops, pretty much ending the transfer speed argument against USB. USB, despite its inferiority, has become ubiquitous; it features on all manner of devices including mobile phones, TV sets, audio devices and printers.

There is still hardware being manufactured with Firewire ports, but this is purely to support those who have invested heavily in Firewire technology over the past two decades.

I remember, not too fondly, when USB first arrived and Windows 95 promised us the arrival of ‘Plug ‘n Play’. Oh, the joys of watching Windows crash when you plugged in a printer, or unplugged it for that matter. I remember cursing HP when they started releasing printers with a USB port and no Centronics parallel port. As USB began to feature on more and more devices and the software supporting it became more stable, my hang-ups over the technology began to fade. By the time USB 2.0 arrived, it had matured into a stable and widely supported standard.

When Apple adopted Intel x86 CPUs, the fate of Firewire was pretty much sealed. Apple still feature Firewire on some of their products, but it is likely to be phased out completely as they update their iMac and Macbook offerings.

SCSI

SCSI is pronounced ‘scuzzy’; what’s not to like about a technology called that?

SCSI was primarily used as a means of connecting hard disks to computer systems, but it also supported other devices such as scanners. SCSI featured in file servers and came as standard in early Macintosh computers.

For a long time SCSI had much higher transfer speeds than other data buses, such as the PC-standard IDE. It had the further advantage of being able to support more devices on a single bus. When SCSI started, you could connect up to seven devices, usually hard disks, on one cable, whereas IDE would only support two. The support for multiple drives on a single controller made SCSI the natural choice for server disk RAIDs, and later for Storage Area Networks (SANs).

With the launch of SATA and the rise in popularity of USB, the advantages SCSI offered were slowly eclipsed by the lower costs associated with SATA. SCSI was (and still is) quite expensive and SATA isn’t, so hardware manufacturers started building NAS and other storage products on SATA, bringing about the demise of SCSI, at least for the layman. Nowadays the average PC motherboard will support at least four SATA drives and will have RAID capabilities.

A variation of SCSI is still in use in very large data storage installations: Fibre Channel, which is based on SCSI principles. This technology is really the realm of the large enterprise, and even then I would say its days are numbered, as it is incredibly expensive, whereas the SATA-based alternatives are not.


Token Ring

Token Ring is a network technology adopted by IBM and was used almost exclusively in organisations that built their IT divisions on IBM kit. It was fast, but more importantly, it was able to handle very high volumes of traffic without becoming congested or suffering from latency.

Ethernet, which practically everyone uses nowadays, isn’t as adept at handling or managing large volumes of network traffic. Without going into the technical differences between Token Ring and Ethernet, the best analogy I can give for how the two differ is to imagine driving your car in two very different countries. Token Ring is like a Western European country: while there is congestion on the roads, traffic does move, because there are road signs and traffic signals to ease congestion and prevent accidents, most people on the road understand the rules, and there are strong disincentives for not driving properly. In comparison, Ethernet is somewhere like India, where there are few road rules and everyone drives pretty much the way they want, because they take a fatalistic approach to driving: if it is your time to die, it is predestined, so you don’t really need to pay due care and attention to the traffic around you; the result is lots of collisions.
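The underlying difference is the media access method: Token Ring stations transmit only when they hold a circulating token, whereas classic shared-medium Ethernet stations transmit whenever they like and recover when a collision is detected. Here is a deliberately crude toy model in Python of that difference; it is not a faithful simulation of either protocol, just an illustration of why collisions hurt under load:

```python
import random

STATIONS = 8     # stations on the shared medium
SLOTS = 10_000   # time slots to simulate
LOAD = 0.5       # chance a station has a frame to send in a slot

random.seed(42)
token_sent = ethernet_sent = 0

for slot in range(SLOTS):
    wants = [random.random() < LOAD for _ in range(STATIONS)]
    # Token Ring (toy): the token visits one station per slot, and only
    # the holder may transmit, so there are never any collisions.
    if wants[slot % STATIONS]:
        token_sent += 1
    # Shared Ethernet (toy): everyone with a frame transmits at once;
    # the slot carries data only if exactly one station tried.
    if sum(wants) == 1:
        ethernet_sent += 1

print(f"Token Ring slots used: {token_sent / SLOTS:.0%}")
print(f"Ethernet slots used:   {ethernet_sent / SLOTS:.0%}")
```

At this load the token-passing model keeps transmitting steadily, while the collision-prone model wastes most of its slots. Real Ethernet mitigates this with carrier sensing, exponential backoff and, today, switching, but the underlying trade-off is the one the analogy describes.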

Ethernet has become ubiquitous purely because it is substantially less expensive than Token Ring, and in certain instances less complicated. Over time network hardware has become cheaper and cheaper, and now a significant proportion of us have home networks built on Ethernet technology. Development on Token Ring ceased a long time ago; despite the chaos in how Ethernet manages network traffic, it has become ever faster, and network hardware has become better at managing or preventing collisions.



This short article is our summation of Nexans’ 40Gb/s (40G) standardisation update bulletin, which reports on the progress of the standardisation of 40Gb/s network speeds. 40Gb/s is the next generation of networking standards for copper-based Ethernet networks, and will in turn be followed by 100Gb/s; at the time of writing, there is no formal standard for 40Gb/s networking, and network hardware manufacturers still face the challenge of achieving 40Gb/s speeds.


10 Gb/s

Currently 10Gb/s (10G) is the fastest speed available for copper-based Ethernet networks. CAT6A cabling systems are accepted to be the minimum standard required to run 10Gb/s networks.

A number of 10Gb/s hardware products claim they can run on CAT6 cabling systems; however, they do so unreliably and certainly do not always operate in an optimum fashion. It may be possible to run 10Gb/s reliably on CAT6 if the cable run is fairly short.

CAT6A is a shielded cable with performance up to 500MHz; in comparison, CAT6 is typically unshielded and provides performance up to 250MHz.


40Gb/s standardisation update


The noteworthy points from the update are:

  • Category 8 cabling standards will be defined in 2013/2014 by the TIA, which is fast-tracking this definition.
  • Category 6A (CAT6A) network cabling will (likely) not be used for the 40Gb/s standard: very complex network hardware would be required to signal at these speeds, and more than one CAT6A cable would be needed.
  • Cat6A or Class EA cabling has insufficient bandwidth, even over short distances such as 30m, to support 40G.
  • Cat7A, with its “double frequency range” of 1000MHz, could be used for a 40G application.
  • The Cat7A system’s extended frequency range would allow simplified encoding, which might be easier for chip designers to implement.
  • The biggest news for copper cabling is that the traditional 100m supported distance has come to an end. No cabling system is intended to run 40GBase-T with a link length above the new 30m limit, and this new length limitation should be respected in any new data centre layout from now on (see the sketch after this list).
  • ISO describe two new sets of channel performance specifications, both specified up to 2 GHz.
    • The first (Channel I) is similar to the TIA Cat.8 approach.
    • The second (Channel II) is based on an improved version of ISO Class FA and shows a significantly better Signal-to-Noise Ratio.
  • Several new specifications to support 40GBase-T are currently being studied, three of which are based on Cat7A components. Cat.8 or Channel I systems are not yet on the market and therefore cannot be purchased; Cat7A systems, even with extended frequency support, are available from certain vendors.
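To make the new length limit concrete, here is a hypothetical helper that checks a planned copper link against the figures quoted above. The table encodes only the numbers in this bulletin, including the proposed (not yet ratified) 40GBase-T limits, so treat it as illustrative rather than authoritative:

```python
# Illustrative only: limits taken from the figures quoted in this article.
# The 40GBase-T entries reflect the proposed 30m limit, not a final standard.
MAX_LINK_METRES = {
    ("CAT6A", "10GBase-T"): 100,  # established 10G support over 100m
    ("CAT7A", "40GBase-T"): 30,   # proposed new 30m limit
    ("CAT8",  "40GBase-T"): 30,   # proposed new 30m limit
}

def link_supported(category: str, application: str, length_m: float) -> bool:
    """Return True if the planned link length fits the quoted limit."""
    limit = MAX_LINK_METRES.get((category, application))
    return limit is not None and length_m <= limit

print(link_supported("CAT7A", "40GBase-T", 45))  # False: exceeds 30m
print(link_supported("CAT7A", "40GBase-T", 25))  # True: within 30m
print(link_supported("CAT6A", "10GBase-T", 90))  # True: within 100m
```

The practical takeaway is the one in the bulletin: data centre layouts designed today should keep prospective 40G copper runs under 30m.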


Nexans and ISO/IEC


Nexans publish bulletins on the progress of the standardisation of 40Gb/s. As a cabling manufacturer, Nexans contribute to the formal development and definition of new networking standards that will support faster network speeds.

Nexans represent cabling manufacturers, who need to develop cables and connectors that conform to the specifications required by the next generation of network hardware: switches, routers and network adaptors.

Nexans work with ISO/IEC to formally define cabling standards (categories) for the next generation of networks and their expected high speeds.

