There's some old thinking around software testing which sometimes gets in the way of progress. Way back when, you could patch up a product extensively after it was released, as long as the initial version looked appealing enough. Programmers believed that most coding errors were incidental, and that software could still run even with plenty of minor bugs. There was also scepticism towards software testing itself: the techniques were still being developed, and glitches seemed to slip through whether or not you checked for them, so the expense and extra time testing demanded didn't seem worthwhile.
This kind of thinking is badly dated for companies now. There's a massive demand for software products, and plenty of companies willing to compete for that marketplace. Users are hungry for anything new, spoiled for choice with quality products, and highly critical of anything that doesn't come up to scratch. Software testing is a straightforward precaution against failure in the marketplace, and against the damage that glitches can do to both the product and your reputation. It works by going through the code systematically, either by running tests against it or by reviewing the functions themselves, and logging every error that crops up.
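To make that a little more concrete, here's a minimal sketch of what testing a piece of code can look like in practice. The function and its checks below are purely hypothetical examples, written with Python's built-in unittest module; a real product would have many more such checks covering its own functions.

import unittest

def apply_discount(price, percent):
    # Hypothetical business function: reduce a price by a percentage.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        # A routine case: 20% off 50.00 should come to 40.00.
        self.assertEqual(apply_discount(50.00, 20), 40.00)

    def test_rejects_invalid_percentage(self):
        # An edge case that only shows up if you actually test for it.
        with self.assertRaises(ValueError):
            apply_discount(50.00, 150)

if __name__ == "__main__":
    unittest.main()  # every failing check is reported, building the log of errors

Running a suite like this is exactly the 'sifting' described above: each failed assertion points at a specific error, which then goes into the log for the programmers to fix.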
It should be pointed out that these errors are numerous, and while a system can tolerate many of them, it's not a good idea to leave them unchecked. Testers report that they're sometimes handed supposedly 'polished' pieces of software that turn out to be riddled with glitches. It's not that the programmers have necessarily been careless; it's just that not all problems show up unless you actually test the software rigorously.
If you don't sift through these errors, there's a good chance they will lead to glitches. Programmers are right that a piece of software can tolerate some errors, but until you test it, it's a lottery how those errors will manifest. They might just slow the software down a little, or they might cause a major failure, even damaging users' computers.
How important testing is to a company will vary with what it's producing, but it's never a bad idea to test, and it will always find errors in the code.