According to the 2011 Verizon Payment Card Industry Compliance Report, requirement 11 - "Regularly test security systems and processes" - is the one least met, so I thought I would dedicate a few newsletters to this subject, starting with the definition and source of vulnerabilities.
The term "vulnerabilities" is often used in the PCI DSS standard to mean the following (per the definition given by the Council):
Flaws or weaknesses which, if exploited, may result in an intentional or unintentional compromise of a system.
Let’s illustrate this by taking our body and soul as the system.
- As a first example, imagine that I'm standing in front of you holding in my hand a test tube containing an explosive product. I warn you to move carefully because this product is sensitive to movement. Ah, I can see the fear in your eyes... Suddenly I shake my hand... nothing happens. I explain that for this product to become so reactive, one needs to add a drop of a reagent. Now let's imagine that while I'm talking to you, someone adds that drop behind my back. What would have happened when I shook the test tube? Yes, indeed... Sorry, it wasn't my fault!
- As a second example, let's analyze the following scenario: a car fatally hits you while you are quietly crossing the street. Here you are the system. What could have caused this awful scenario? Bad luck, maybe? Multiple factors could have led to it: you crossed without paying attention; you had your iPod on; the driver didn't see you; you had a bad day and were deep in thought; you are blind, deaf, or both. The environment also plays a role: crossing a city boulevard during peak hours is quite different from crossing a country street on a Sunday morning. These factors are the weaknesses or vulnerabilities that increase the probability of this scenario occurring.
Vulnerabilities in information systems
The world isn’t perfect, and certainly not when it comes to information technology. There is a variety of vulnerabilities across information systems (computers, network systems, operating systems, and software applications) that may originate from vendors, system administration activities, or user activities:
- Vendor-originated: this includes software bugs, vulnerable services, insecure default configurations, and web application vulnerabilities.
- System administration-originated: this includes incorrect or unauthorised system configuration changes and the lack of password protection policies.
- User-originated: this includes sharing directories, opening infected documents, choosing easily guessed passwords, and downloading and installing third-party software.
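To make the vendor-originated category concrete, here is a minimal sketch of a classic web application vulnerability: SQL injection. Everything below (the table, the user record, the function names) is illustrative, not taken from any real product; the point is simply how string concatenation creates the flaw and how a parameterized query closes it.

```python
import sqlite3

# Throwaway in-memory database with one user record (illustrative data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_vulnerable(name):
    # Vendor-originated flaw: user input is concatenated straight into SQL.
    query = "SELECT secret FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Same query with a bound parameter: the input can no longer alter the SQL.
    return conn.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

# A crafted input turns the WHERE clause into a tautology and dumps every secret.
print(lookup_vulnerable("x' OR '1'='1"))  # leaks [('s3cret',)]
print(lookup_safe("x' OR '1'='1"))        # [] -- no such user, nothing leaks
```

Note how the "exploit" here is just ordinary input: the weakness sits quietly in the code until someone supplies the drop of reagent.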
Why aren't bugs fixed before software release?
Bugs are a consequence of human factors in the programming task. They arise from oversights or mutual misunderstandings made by a software team during specification, design, coding, data entry, and documentation.
As computer programs grow more complex, bugs become more common and more difficult to fix. Programmers often spend more time and effort finding and fixing bugs than writing new code. On some projects, more resources may be spent on testing than on developing the program.
There are various reasons for not fixing bugs:
- The developers often don't have time or it is not economical to fix all non-severe bugs.
- The fix can be deferred to a future version.
- The changes to the code required to fix the bug could be large, expensive, or delay finishing the project.
- Even seemingly simple fixes bring the chance of introducing new unknown bugs into the system.
- It's "not a bug": a misunderstanding has arisen between expected and provided behavior.
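The "not a bug" case deserves a concrete illustration, because it happens even in mainstream tools. Python's built-in round() is a real example: it uses banker's rounding (round-half-to-even), as its documentation states, yet users expecting school-style round-half-up regularly report the behavior as a bug.

```python
# Python 3's round() resolves ties by rounding to the nearest EVEN integer
# (banker's rounding), which is the documented, intended behavior.
print(round(0.5))  # 0, not 1
print(round(1.5))  # 2
print(round(2.5))  # 2, not 3 -- the tie goes to the even neighbor
```

Expected behavior and provided behavior diverge, a report is filed, and the developers rightly close it as "works as designed."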
Given the above, it is generally considered impossible to write completely bug-free software of any real complexity. As a consequence, software is released with known and unknown bugs.
Is it a problem? Well let’s see in our next newsletter.
- Did you find this article useful?
- What is your feeling about the amount of buggy software in the field?