Even if the world doesn’t like your idea, that’s no reason to give up.
You have what seems to be a great technical breakthrough, but other experts in your field disagree with you. Some even think you might have lost your marbles. But you are no Victor Frankenstein with an idea too insane to pursue. What to do?
Some rely on patient persuasion that eventually pays off once practical reality catches up with the idea. Fifty years on, it seems odd to us that engineers ever doubted Charles Kao's confidence in fibre optics. Many innovators find themselves starved of funding, which saps their ability to change minds before the concept is put on the shelf, perhaps to be rediscovered decades later. A few think spectacle is the way to change minds - an approach taken to its limits in the War of Currents.
What has driven the turnaround in most cases is the way the economics of technology flips the odds. When Intel engineer Ted Hoff proposed borrowing elements of minicomputer architecture and shrinking them down for calculators, his own employer thought the concept had a limited market: in the minicomputers themselves, dedicated hardware circuits were more efficient. Today, microprocessors are the go-to choice for many projects because they are so much easier and cheaper to use, even in circuits that would have relied on custom hardware just a decade ago.
War of currents
Ever keen to convince a sceptical public of the value of his inventions, Thomas Edison went further than most engineers. As the War of Currents escalated, he quietly encouraged electrician Harold Pitney Brown to demonstrate the dangers of adopting rival George Westinghouse’s plan of using alternating current (AC) to distribute electricity rather than Edison’s DC. And what better way to show that than by electrocuting animals? Brown staged a number of events to show how quickly AC would kill stray cats and dogs and even the occasional horse.
Edison personally had little to do with the most famous animal execution, that of Topsy the elephant in a funfair on Coney Island. But because companies he once ran supplied the electricity and a camera crew came along to document the event, his name is forever linked to the incident.
Edison’s bigger problem was that, with the technology of the time, DC stood practically no chance. His views on how electricity would be generated and distributed were far ahead of what electromechanical switchgear could support. As well as backing DC, he thought electricity would be generated locally, which is beginning to come true through microgeneration.
Almost 60 years after the War of Currents, DC received a second chance and a way of infiltrating the AC-dominated grid. The development of practical mercury-arc valves by ASEA engineer Uno Lamm gave Sweden the option of using DC to carry power to the Baltic island of Gotland in 1954. The rise of large-scale renewable farms is now leading countries to use DC to ferry energy across huge distances. China has become one of the main drivers of high-voltage DC (HVDC) work in recent years, and DC is becoming the link of choice for interconnecting grids and renewable farms.
Similarly, the transistor, invented in 1947, is now instrumental in the drive towards both high- and low-voltage DC for distribution. Low-voltage DC may become more common in homes to take better advantage of roof-mounted photovoltaics, and data-centre operators are beginning to shift their systems to DC because such systems take up less space and are more efficient, especially once backup power is taken into account.
Agile development
Software often winds up unfit for its original purpose. So why not tackle the problem by not fixing a clear purpose at the outset, and instead thrashing out what the software should do as the project proceeds?
The Agile Manifesto of 2001 had many critics and suffered plenty of infighting among proponents. But even people working on hardware design and safety-critical systems have come to regard agile development as viable.
The change in attitude has largely come about because of the realisation that the specifications themselves are often wrong. By developing prototypes incrementally, engineers can iron out problems in the specification before they cause real trouble.
Shotgun sequencing
If you want to sequence a long strand of DNA, shredding it into tiny pieces and then letting a supercomputer work out which bit is meant to go where sounds insane. But that is the essence of shotgun DNA sequencing. The people running the Human Genome Project in the 1990s were also less than impressed with the concept.
US researchers James Weber and Gene Myers pressed ahead with the idea, based on Myers’ successes in developing software that could do the job on a smaller scale with much simpler bacterial DNA. The computer stood a fighting chance of finding useful overlaps because it was dealing with many copies of the DNA shredded more or less randomly.
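The core idea - reassembling a long sequence from random, overlapping fragments - can be sketched as a toy greedy assembler. This is an illustration only, not Myers' actual algorithm, which was far more sophisticated; the reads and sequences below are made up:

```python
# Toy shotgun assembly: repeatedly merge the pair of reads with the
# longest suffix-prefix overlap. Random shredding of many copies of the
# DNA means fragments overlap, which is what makes reassembly possible.

def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of `a` matching a prefix of `b`."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads: list[str]) -> str:
    reads = reads[:]
    while len(reads) > 1:
        best = (0, 0, 1)  # (overlap length, index i, index j)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    n = overlap(a, b)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:  # no overlaps left: a gap the assembler cannot close
            return "".join(reads)
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)]
        reads.append(merged)
    return reads[0]

reads = ["GATTACAGGA", "ACAGGATCCA", "GATCCATTGC"]
print(greedy_assemble(reads))  # GATTACAGGATCCATTGC
```

A real genome is vastly harder: repeated stretches mean many fragments overlap in more than one place, which is exactly the ambiguity critics of the approach pointed to.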
University of Washington bioinformatician Philip Green was polite but scathing in his demolition of the proposal - its chief problem being that it would potentially leave thousands of gaps in what was meant to be the definitive map of the human genome. Long stretches of the genome consist of the same small group of bases repeated time and again, making it hard for the computer to find unique matches.
But the Human Genome Project itself was running out of steam using more traditional techniques. The job was simply too big for them.
Craig Venter saw in shotgun sequencing an opportunity to beat the official project at its own game. With $300m of private funding versus ten times that allocated to the public project, Celera Genomics quickly built an initial genome. The company did get a running start by using the completed sequences of the Human Genome Project. But the rapid pace of sequencing at Celera pushed the publicly funded project to switch strategies, which accelerated its own work.
Fibre optics
Just over 50 years ago, Charles Kao stood in front of an audience of IEE members and proposed a way of making glass channel photons over a distance of 10km, if not more. The problem was making glass of the right purity: ordinary glass absorbs most of the light before it has travelled even a metre. As Kao acknowledged in his Nobel Prize lecture 43 years later, you only had to turn a sheet of glass on its side to see how opaque it becomes over any significant distance.
Part of the problem lay in impurities. The other was one of structure. Kao and colleague George Hockham proposed that, rather than making a fibre out of one type of glass - the design used up to that point for light pipes, and for an earlier flexible endoscope put together by Harold Hopkins - the glass core should be clad in an outer layer with a lower refractive index. This would, up to a point, confine the light to the core.
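The confinement relies on total internal reflection at the core-cladding boundary, which a quick calculation illustrates. The index values below are typical for a modern silica fibre, not figures from Kao and Hockham's paper:

```python
# Why a lower-index cladding confines light: Snell's law says rays that
# strike the core-cladding boundary at more than the critical angle
# (measured from the normal) are totally internally reflected.
import math

n_core = 1.48  # refractive index of the glass core (illustrative)
n_clad = 1.46  # refractive index of the cladding (illustrative, lower)

theta_c = math.degrees(math.asin(n_clad / n_core))  # critical angle

# Numerical aperture: how steep a ray can enter the fibre end from air
# and still be guided.
na = math.sqrt(n_core**2 - n_clad**2)
acceptance = math.degrees(math.asin(na))

print(f"critical angle: {theta_c:.1f} degrees")
print(f"numerical aperture: {na:.3f}, acceptance half-angle: {acceptance:.1f} degrees")
```

Even a small index step between core and cladding is enough: here the fibre accepts rays up to about 14 degrees off-axis and keeps them bouncing along the core.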
Although the Post Office and the Ministry of Defence took the idea, published in the Proceedings of the IEE in July 1966, seriously enough to launch small research projects, Kao had to spend years convincing the telecoms industry that glass fibre could not only replace copper but surpass it in its ability to transmit data. British Telecom had been “somewhat scathing” about the idea, and Bell Laboratories, “which could have led the field, took notice only much later”, Kao noted.
A slow and steady campaign to sell the idea finally began to pay off when glassmaker Corning produced a waveguide of fused silica with a loss of less than 20dB/km - a massive improvement over earlier glass fibres. Less than a decade later, waveguides were losing less than 1dB/km. By that time, British Telecommunications had decided to start building a core digital telephone network based on fibre optics.
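Decibel loss compounds multiplicatively with distance, which a few lines of arithmetic make concrete. The 20dB/km and 1dB/km figures come from the text above; the rest is illustration:

```python
# Converting a fibre loss figure in dB/km into the fraction of light
# that survives a given distance: loss in dB is 10*log10(in/out).
def transmitted_fraction(loss_db_per_km: float, km: float) -> float:
    """Fraction of the launched light remaining after `km` kilometres."""
    return 10 ** (-loss_db_per_km * km / 10)

# Corning's fibre at 20 dB/km: 1% of the light left after one kilometre.
print(transmitted_fraction(20, 1))  # 0.01

# At 1 dB/km, roughly 79% survives each kilometre, and about 10% makes
# the 10 km journey Kao originally proposed.
print(round(transmitted_fraction(1, 1), 2))   # 0.79
print(round(transmitted_fraction(1, 10), 2))  # 0.1
```

Window glass, by contrast, loses thousands of dB per kilometre - which is why Kao's sheet of glass turned on its side looks anything but transparent.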