www.apress.com

2018/4/11

Quotations from Good Habits for Great Coding

By Michael Stueben

Selected advice from the author’s 30-plus years of teaching students to program:

There are four kinds of computer bugs. Most students know only one. Most students think that if their program does what it is supposed to do every time in all reasonable situations, then the program is finished. But that is so untrue. Programs like that have removed bugs only of the first kind: compile errors and logic errors. Errors of the second kind are code-readability or style errors. The worst thing you can say about someone’s code is that to debug it, or modify it, you have to re-write it from scratch. Errors of the third kind are functionality errors. Does the program have all of the features a user desires? The fourth kind of error is the interface error. Is the program intuitive to use? Does the program give the user the information he or she needs at the right time?

** ** **

Is your program finished before the deadline? If yes,

  1. Did you use step-wise refinement? [If no, then go back and fix.]
  2. Did you refactor when you finished? [If no, then go back and fix.]
  3. Did you write self-documenting code? [If no, then go back and fix.]
  4. Did you limit functions to single tasks? [If no, then go back and fix.]
  5. Did you use the idioms of your language? [If no, then go back and fix.]
  6. Did you use asserts and other error traps? [If no, then go back and fix.]
  7. Did you use vertical alignment where useful? [If no, then go back and fix.]
  8. Did you create labeled and attractive output? [If no, then go back and fix.]
  9. Did you print the time your program took to run? [If no, then go back and fix.]
  10. Did you test the final product well, especially special cases and borderline cases? [If no, then go back and fix.]
  11. Did you test each major function immediately after you wrote it? [If no, don’t do this again. Adopt the habits of professionals.]
  12. Did you avoid writing clever code, doing needless optimizing, and coding for unimportant cases? [If no, don’t do this again. Adopt the habits of professionals.]
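Several of the checklist items (asserts and error traps, immediate testing, labeled output, and printing the running time) can be shown together in a short sketch. The book does not supply this code; the `mean` function here is a hypothetical example chosen only to illustrate the habits.

```python
import time

def mean(values):
    """Average of a non-empty list (item 6: an assert traps bad input)."""
    assert len(values) > 0, "mean() requires a non-empty list"
    return sum(values) / len(values)

# Item 11: test the function immediately after writing it.
assert mean([2, 4, 6]) == 4.0
assert mean([5]) == 5.0

# Items 8 and 9: labeled, readable output that includes the running time.
start = time.perf_counter()
result = mean(list(range(1_000_000)))
elapsed = time.perf_counter() - start
print(f"mean of 0..999999 = {result}")
print(f"elapsed: {elapsed:.4f} s")
```

The point is not the function itself but the surrounding habits: the assert fails loudly on an empty list instead of raising a cryptic `ZeroDivisionError`, the tests sit right next to the definition, and the output labels itself.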

** ** **

Both computer scientists and physicists often do what is called non-rigorous mathematics—i.e., mathematical thinking based on analogies and apparent patterns, reasoning that would not be acceptable to a mathematician. This works in computer science because the computer scientist then writes a program that works, based on the math, and thereby confirms (to a degree accepted by some) the mathematics. In a similar way, the physicist builds stuff that works, thereby confirming (to a degree accepted by some) the mathematics. Of course, it would be better to prove the mathematics rigorously, but that often requires symbol-manipulation skills a researcher does not have. And to develop those skills (if even possible) would take time away from research. Most modern research is done with teams, partly because ambitious projects take too much time for one person, but also because too few people have all the skills needed for a big project.

** ** **

Recall that algorithms, along with their instantiation as functions, are evaluated traditionally by THREE criteria:

  1. speed. ("Better" is the enemy of "good enough"; you might not need super speed.) To confuse matters, a function that is second best on one set of data sometimes turns out to be best on a different set.
  2. readability (ease of debugging, modifying, and understanding). Of course, some functions are difficult to understand no matter how they are written.
  3. memory (memory hogs are impractical).

Years ago, as a student, I wrote quicksort. My code sorted almost all the numbers, but a few were left unsorted. I had used a "<" where I should have used a "<=". Luckily, I tested the code with a large number of integers (not floats), in a small range (two digits), and with a checking routine, so that I did not have to inspect the output visually for correctness. Had I not done all of this, it is unlikely that any of my test cases would have failed: the code failed only with duplicate numbers, and sometimes not even then. Now imagine that my flawed quicksort was a small part of a large student program. I would have been convinced that the sort was correct, and because of limited time and energy, I might never have re-tested it.
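The author does not show his original code, but the kind of test harness he credits with catching the bug is easy to sketch: a quicksort whose partition boundary depends on "<=" versus "<", checked mechanically over many random trials with two-digit integers so that duplicates are guaranteed to occur.

```python
import random

def quicksort(a):
    """Simple recursive quicksort. The '<=' in the left partition is the
    boundary the anecdote turns on: with '<' instead, elements equal to
    the pivot are silently dropped, so only inputs with duplicates fail."""
    if len(a) <= 1:
        return a
    pivot = a[0]
    left  = [x for x in a[1:] if x <= pivot]
    right = [x for x in a[1:] if x > pivot]
    return quicksort(left) + [pivot] + quicksort(right)

def is_sorted(a):
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

# The checking routine: many trials, a small range so duplicates are
# common, and a mechanical check instead of visual inspection.
for trial in range(1000):
    data = [random.randint(10, 99) for _ in range(50)]
    out = quicksort(data)
    assert is_sorted(out) and out == sorted(data), f"failed on {data}"
print("1000 randomized trials passed")
```

Note that the checker compares against `sorted(data)`, not just `is_sorted(out)`: a version that drops duplicates can still produce a sorted list, so checking order alone would miss exactly the bug described.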

My point is this: There is more to evaluating an algorithm than the three criteria stated previously. The ease of understanding an algorithm, the difficulty of translating it into computer code, and the difficulty of reusing that code in other programs are also significant properties of an algorithm.

** ** **

An important style of programming is called incremental (aka iterative, aka evolutionary) development. In this style, the programmer first writes the program with only a small subset of the requirements (a “walking skeleton”). Once that is working, a new set of requirements is added. Then, when the improved program is working, another set of requirements is added, and so on. The design will probably need to be reorganized multiple times during development. Sometimes the evolutionary approach is called the MoSCoW method: Must have, Should have, Could have, and Won’t have (this time, but would like to). Sometimes it is referred to as time boxing.

There are advantages to this approach. A working—admittedly incomplete—version of the program always exists. This gives a psychological boost to the programmer(s). There is much less stress and uncertainty at the end of the project than is typically the case with large projects. The graphical layout, the interface, and the user directions tend to improve due to early user feedback. The early versions of the program become prototypes that guide the final design. Is this the best way to program? Possibly for programs with many features, but most school programs just develop algorithms.

About the Author

Michael Stueben started teaching Fortran at Fairfax High School in Virginia in 1977. Eventually the high school computer science curriculum changed from Fortran to BASIC, Pascal, C, C++, Java, and finally to Python. For the last five years, Stueben taught artificial intelligence at Thomas Jefferson High School for Science and Technology in Alexandria, VA. Along the way, he wrote a regular puzzle column for Discover Magazine, published articles in Mathematics Teacher and Mathematics Magazine, and published a book on teaching high school mathematics, Twenty Years Before the Blackboard (Mathematical Association of America, 1998). In 2006 he received a Distinguished High School Mathematics Teaching / Edyth May Sliffe Award from the Mathematical Association of America.

Want more? The above are highlights of the advice you’ll find in Good Habits for Great Coding, the new book that distills the author’s three decades of analyzing his own mistakes, analyzing student mistakes, searching for problems that teach lessons, and searching for simple examples to illustrate complex ideas. Get your copy today!