Computer system ‘integrity’ critical to trust
I put the phone down and sighed. For the second time, our car was in the garage, and the mechanic had called to say he couldn’t find the fault: his diagnostic computer was unable to determine the problem. Our car’s computer would switch itself into ‘safety mode’ at random, which meant it crawled along at ten miles per hour. Not only was our confidence in the car shaken by the potential safety issue, but the mechanic was also unable to fix a problem in something created by humans.

Previously, anyone could pop the hood of their car and tinker with the engine. These days, mechanics must also understand technology; modern cars are essentially computers on wheels. Is it possible we have become over-dependent on computer technology?

With the advantages of computerisation come drawbacks. One major drawback is that once a system becomes untrustworthy, it is difficult to regain that trust. Trust, here, means confidence that a piece of technology functions properly. In computer systems, trust is related to a central tenet of IT security called integrity.

Integrity in a computer system means that we know when and why changes have occurred. It implies that from any logical state we can predict, with very high accuracy, what happens next; in other words, the system is reliable. If integrity is lost, the system becomes unpredictable, and when it starts doing things we don’t understand, people lose trust in it.

Part of IT security is implementing the appropriate safeguards to maintain integrity in systems. This extends from business systems to the computers that run cars, airplanes, mass transportation and even nuclear power plants. In our rush to benefit from technology (and in manufacturers’ desire to make a profit), critical steps are sometimes skipped. Verifying that a system works under all conceivable circumstances is impossible, but to ensure system integrity, technology makers must invest the appropriate amount of effort to make their systems dependable.
In IT security, this is called system testing. The concept applies to all technology to varying degrees, but especially to custom-designed software used by only a few companies. Generally, the more users a technology has, the more likely it is that its flaws will be eliminated over time. With small-scale technology that receives limited testing, such as custom software, integrity becomes critical: if a company’s system integrity is compromised, the company may never recover.

When designing a computer system, there are ways to ensure system integrity. One of the best is to build the appropriate safeguards into the design from project conception. For example: make certain that safeguards can’t be bypassed in the finished product; eliminate back-door access to systems; and take a structured approach to design and to future system changes. When changes are made, ensure there is an appropriate level of oversight, so that inferior work is detected and potentially dangerous shortcuts or ‘malicious’ changes don’t slip in unnoticed.

When technology goes wrong, it can go spectacularly wrong. On New Year’s Eve this year, guests at a hotel in Denver, US found themselves locked out of their rooms at the stroke of midnight. Apart from the compensation costs, damage to reputation and international headlines generated for the hotel, some of the guests resorted to violence against each other. Thus far, the common explanation is that programmers took shortcuts fixing a Y2K bug, though the hotel hasn’t officially commented.

In Bermuda, our banks are moving away from manual cheque processing, replacing it with swipe cards and electronic transfers. In the not-too-distant future, we may no longer use cash. It will be interesting to see what happens when things go wrong then.
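The idea that integrity means knowing when and why changes occur can be made concrete with a small sketch. The Python fragment below is an illustration of the general principle, not any particular product’s mechanism, and the function names are my own: it records a cryptographic fingerprint of a file while the system is in a known-good state, then checks it later. A changed fingerprint means the file was modified, and if nobody authorised a change, integrity is in question.

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify(path: Path, baseline: str) -> bool:
    """True if the file still matches its known-good baseline digest."""
    return fingerprint(path) == baseline
```

Host-based intrusion-detection tools apply this same principle at scale, baselining thousands of files and raising an alert when one changes unexpectedly.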
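The oversight safeguard can be sketched in code as well. Assuming a simple in-house change log (the class and field names here are hypothetical), one basic control is to refuse to record any change that has not been reviewed by someone other than its author:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Change:
    description: str   # what was changed and why
    author: str        # who made the change
    approver: str      # who reviewed and signed off

class ChangeLog:
    """Append-only record of system changes: what, who, and why."""

    def __init__(self) -> None:
        self._entries: list[Change] = []

    def record(self, change: Change) -> None:
        # Separation of duties: no one approves their own work.
        if change.approver == change.author:
            raise ValueError("a change must be approved by someone other than its author")
        self._entries.append(change)

    def history(self) -> tuple[Change, ...]:
        return tuple(self._entries)
```

A rule this simple would not have prevented the Denver hotel failure on its own, but it is the kind of structural check that makes dangerous shortcuts harder to slip in unnoticed.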
Steven Hardy is the technology security officer for a local insurance company. His views are his own and do not necessarily represent the views of his employer.