The only computer science book worth reading twice?

14.05.10

I was talking to one of my students earlier, and lent him a book to read over summer. It was only after he’d left that I realised that — for me at any rate — the book I’d given him is probably the most seminal work in the whole of computer science, and certainly the book that’s most influenced my career and research interests.

So what’s the book? Structure and Interpretation of Computer Programs by Hal Abelson and Jerry Sussman (MIT Press, 1984, ISBN 0-262-01077-1), also known as SICP. The book’s still in print, but — even better — is available online in its entirety.

OK, everyone has their favourite book: why’s this one so special to me? The first reason is the time I first encountered it: in Newcastle upon Tyne in the second year of my first degree. I was still finding my way in computer science, and this book was a recommended text after you’d finished the first programming courses. It’s the book that introduced me to programming as it could be (rather than programming as it was, in Pascal at the time). What I mean by that is that SICP starts out by introducing the elements of programming — values, names, binding, control and so on — and then runs with them to explore a quite dazzling breadth of issues including:

  • lambda-abstraction and higher-order computation
  • complex data structures, including structures with embedded computational content
  • modularity and mutability
  • streams
  • lazy evaluation
  • interpreter and compiler construction
  • storage management, garbage collection and virtual memory
  • machine code
  • domain-specific languages

…and so forth. The list of concepts is bewildering, and only stays coherent because the authors are skilled writers devoted to their craft. But it’s also a remarkable achievement to handle all these concepts within a single language framework — Scheme — in such a way that each builds on what’s gone before.
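
To make a couple of those concrete, here is a minimal sketch in Scheme of higher-order computation and of lazy streams. It’s my own paraphrase rather than code lifted from the book: SICP builds its streams with a cons-stream special form, but the standard delay and force do the same job here.

    ;; A higher-order procedure: sum f(a) + f(a+1) + ... + f(b)
    (define (sum f a b)
      (if (> a b)
          0
          (+ (f a) (sum f (+ a 1) b))))

    (sum (lambda (x) (* x x)) 1 10)   ; => 385

    ;; A lazy stream of integers: the tail is a promise,
    ;; only forced when it's actually needed
    (define (integers-from n)
      (cons n (delay (integers-from (+ n 1)))))

    (define (stream-ref s k)
      (if (= k 0)
          (car s)
          (stream-ref (force (cdr s)) (- k 1))))

    (stream-ref (integers-from 0) 5)  ; => 5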

The second reason is the way in which Hal and Jerry view everything as an exercise in language design:

We have also obtained a glimpse of another crucial idea about languages and program design. This is the approach of stratified design, the notion that a complex system should be structured as a sequence of levels that are described using a sequence of languages. Each level is constructed by combining parts that are regarded as primitive at that level, and the parts constructed at each level are used as primitives at the next level. The language used at each level of a stratified design has primitives, means of combination, and means of abstraction appropriate to that level of detail.

Layered abstraction of course is second nature to all computer scientists. What’s novel in this view is that each level should be programmable: that the layers are all about computation and transformation, and not simply about hiding information. We don’t see that in the mainstream of programming languages, because layering doesn’t extend the language at all: Java is Java from top to bottom, with classes and libraries but no new control structures. If a particular domain has concepts that would benefit from dedicated language constructs, that’s just tough. Conversely (and this is something that very much interests me) if there are constructs it’d be desirable not to have in some domain, they can’t be removed. (Within the language, anyway: Java ME dumps some capabilities in the interests of running on small devices, but that’s not something you can do without rewriting the compiler.)
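
To see what a programmable layer looks like, here’s a tiny example of extending Scheme with a new control structure, written as though while were primitive. It uses syntax-rules macros, which post-date the first edition of SICP, so this is my illustration rather than the book’s; but it’s exactly the kind of extension that Java, from top to bottom, doesn’t permit.

    ;; Add a `while` loop to the language: the macro rewrites
    ;; (while test body ...) into a named let that re-tests and loops
    (define-syntax while
      (syntax-rules ()
        ((_ test body ...)
         (let loop ()
           (if test
               (begin body ... (loop)))))))

    (define i 0)
    (while (< i 3)
      (display i)
      (newline)
      (set! i (+ i 1)))
    ;; prints 0, 1 and 2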

The third influential feature is the clear-sighted view of what computer science is actually about:

The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology — the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects. Mathematics provides a framework for dealing precisely with notions of “what is.” Computation provides a framework for dealing precisely with notions of “how to.”
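
The square-root example from early in the book captures the contrast nicely. Mathematics defines sqrt(x) declaratively, as the non-negative y with y * y = x, which says nothing about how to find it; Newton’s method is pure “how to”. Roughly as it appears in section 1.1.7 (this shadows the built-in sqrt, which Scheme happily allows):

    ;; "What is": sqrt(x) is the y >= 0 such that y * y = x
    ;; "How to": start with a guess and repeatedly improve it
    (define (average a b) (/ (+ a b) 2))

    (define (improve guess x)
      (average guess (/ x guess)))

    (define (good-enough? guess x)
      (< (abs (- (* guess guess) x)) 0.001))

    (define (sqrt-iter guess x)
      (if (good-enough? guess x)
          guess
          (sqrt-iter (improve guess x) x)))

    (define (sqrt x) (sqrt-iter 1.0 x))

    (sqrt 2)   ; => 1.4142... (approximately)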

I’ve taken a view before about computers being the new microscopes, opening up new science on their own as well as facilitating existing approaches. The “how to” aspect of computer science reappears everywhere in this: in describing the behaviours of sensor networks that can adapt while continuing to reflect the phenomena they’ve been deployed to sense; in the interpretation of large-scale data mined and mashed-up across the web; in capturing scientific methods and processes for automation; and so forth. The richness of these domains militates against packaged software and encourages integration through programming languages like R, so that the interfaces and structures remain “soft” and open to experimentation.

When I looked at my copy, the date I’d written on the inside was September 1988. So a book I bought nearly 22 years ago is still relevant. In fact, I’d go further and say that it’s the only computer science book of that age that I’d happily and usefully read again without it being just for historical interest: the content has barely aged at all. That’s not all that unusual for mathematics books, but it’s almost unheard of in computer science, where the ideas move so quickly and where much of what’s written about is ephemeral rather than foundational. It goes to show how well SICP nailed the core concepts. In this sense, it’s certainly one of the very few books on computer science that it’s worth reading twice (or more). SICP is to computer science what Feynman’s Lectures on Physics are to physics: an accessible distillation of the essence of the subject that’s stood the test of time.
