I've been working on a software project at home for the past several weeks. Until now, I haven't had the project under source code control. A few days ago, I installed open source Subversion and I'm very happy with the results.
I installed the Windows version of CollabNet Subversion from openCollabNet. The current version is 1.4.2. Installation is painless. You just have to answer a few questions including whether to install Subversion as an add-on to an Apache server or as a standalone server (svnserve). I chose the standalone option and my server was up and running in a matter of minutes.
One tip: You will want to check out the instructions for running svnserve as a Windows service. This lets you automatically start your Subversion server when Windows starts. You can also use the Services UI to stop the server, for example, when you back up your repository.
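The Subversion documentation shows how to register svnserve with the Windows service manager using the `sc` tool. A sketch is below; the install path, repository root, and service name are my own assumptions, so adjust them for your setup:

```shell
# Register svnserve as a Windows service (run from an elevated prompt).
# The inner escaped quotes are needed because the binary path contains spaces.
sc create svn binpath= "\"C:\Program Files\CollabNet Subversion Server\svnserve.exe\" --service -r C:\repos" displayname= "Subversion Server" depend= Tcpip start= auto

# Start the server, or stop it before backing up the repository.
net start svn
net stop svn
```

Note the `sc` syntax quirk: each option keyword ends with `=`, followed by a space before its value.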
Although CollabNet Subversion includes a command line client, you can pick from a handful of GUI clients too. I chose Subclipse, an Eclipse Team Provider plug-in. If you've experienced the way Eclipse integrates with CVS, Subclipse will be very familiar. The preceding link brings you to a page with information on Eclipse update sites where you can get the version of Subclipse that's right for your version of Eclipse. And How to Use Subversion with Eclipse is a good tutorial for Subclipse beginners.
Sunday, April 15, 2007
Wednesday, March 21, 2007
John Backus, 1924 - 2007
As reported in an AP story printed in the Washington Post, John Backus died on Saturday at the age of 82. Backus, an IBM Fellow, led the team that developed FORTRAN, the first widely used high-level programming language. He also contributed to the development of the Backus-Naur Form (BNF), a notation for describing the grammar of programming languages. Software developers, even those who have never used FORTRAN or BNF, owe John Backus a huge debt of gratitude.
As quoted in the AP story:
"Much of my work has come from being lazy," Backus told Think, the IBM employee magazine, in 1979. "I didn't like writing programs, and so, when I was working on the IBM 701 (an early computer), writing programs for computing missile trajectories, I started work on [FORTRAN] to make it easier to write programs."

Backus claimed to be lazy. In reality, his hard work hastened the development of the software industry we know today.
Thanks.
Monday, September 11, 2006
EclipseWorld Wrap Up
Friday, September 8 was the last day of the EclipseWorld technical conference. I won't go into all the gory details of the sessions I attended. If you read my Day Two report, you know what to expect: more of the same. Some sessions missed the mark, at least for me; they just weren't technical enough. In fact, in an informal poll, the consensus among people who have attended both conferences was that EclipseCon is the better conference for experienced Eclipse developers.
Of course, the conference did have some good sessions. The best session I attended on Friday was called Contributing Code to Eclipse. How to. Why to. It was conducted by Bjorn Freeman-Benson, Director of the Open Source Process at the Eclipse Foundation. Bjorn described the organization of the Eclipse Foundation, the motives of its member companies *, and the details of the development process.
As Bjorn puts it, one measure of the Eclipse project's success is that it has released a version of Eclipse every year, on schedule, for the past seven summers. This is a record most commercial software companies would be proud of, but Eclipse is an open source project. You might have expected it to disintegrate into anarchy by now.
How has the project managed to be so successful? Bjorn highlighted three points:
- It's a meritocracy. Only the best developers become Eclipse committers. Even a paying foundation member company cannot install one of their developers on a project without approval from the current committers. Generally, you become a committer by contributing patches first.
- The process is completely transparent. Everything from project planning, to staffing, to the actual source code is recorded on the Eclipse web site. This ensures that both committers and the eventual consumers know what is happening with the project.
- Communication is key. This is related to the point about transparency, but Bjorn highlighted it a few times. He said even a great developer will not succeed in an Eclipse project unless he is also a good communicator.
* Eclipse Foundation members commit resources to Eclipse projects not out of altruism. They expect to make money on the Eclipse framework.
Thursday, September 07, 2006
EclipseWorld, Day Two
For me, today was Rich Client Platform (RCP) day at EclipseWorld. I planned on attending four separate sessions on RCP. As it was, I took a small detour to learn about the Java profiling tools in the Test & Performance Tools Platform (TPTP).
The first RCP course was First Steps for Building Eclipse RCP Applications. It was mostly review for me, but I thought it would be good preparation for some of the more advanced courses. The instructor, Dwight Deugo, did a good job describing the basics.
I also attended Fundamentals of RCP UI Programming. This session was a little disappointing. The instructor is a good speaker, but he spent the entire two hours talking about JFace -- and from a very high level. Although he occasionally showed some sample code, it was difficult to follow along. The sample code wasn't reproduced in the presentation materials. Even though I was near the front, I couldn't see the details. Pity the poor folks in the back.
To complete "RCP day", I planned on attending a two-part session called Successful Architecture Design for RCP Applications. I expected this to be about factoring your RCP application into features, plugins and fragments; interactions between views and editors; extension points; the job manager; and other hard-core Eclipse concepts. Ten minutes into the first part, I realized I was mistaken. The session was all about migrating three-tier business applications from the web to RCP. Although the instructor has good "Eclipse credentials", this topic didn't really fit with the other sessions in the RCP track. It certainly doesn't interest me personally.
Rather than sit through the second part of Successful Architecture, I decided to switch to Profiling Java Application Behavior with Eclipse TPTP. This was a revelation. The instructor demonstrated the features of the Profiling and Logging perspective, which TPTP contributes to Eclipse. The profiling views let you track execution flow, execution statistics, memory statistics and object references in a running JVM. You can quickly sort these views to find hot spots like methods that consume lots of CPU cycles. I use Eclipse every day, but I didn't realize the free profiling tools had gotten this good. For a good introduction to the Profiling and Logging perspective, see this tutorial at eclipse.org.
Wednesday, September 06, 2006
EclipseWorld, Day One
I'm at the EclipseWorld technical conference in Cambridge this week. I was hoping to blog from the conference, but there were technical problems with the conference's wireless network today. There are only a few hundred attendees, but the organizers apparently didn't plan for a large volume of network traffic. This doesn't reflect well on the conference.
Today, we each had to choose one of seven all-day tutorials to attend. I attended the Callisto Boot Camp:
This tutorial, for experienced Eclipse developers who are currently using Eclipse 3.1, will deep-dive on the new features and innovations in each of the 10 projects that make up the Callisto Simultaneous Release. By attending this class, you'll gain a unique perspective on these projects, not only about the individual new functions that they offer, but how they integrate together to advance the entire Eclipse ecosystem. Everything you want to know about Callisto--you'll find it here.

The instructor -- Eclipse Evangelist, Wayne Beaton -- acknowledged from the start that it is difficult to do a deep-dive on everything. For my taste, he spent a little too much time on the Web Tools Platform (WTP) and not enough on the C/C++ Development Tools (CDT) or Data Tools Platform (DTP). However, I'm not really criticizing. Mr. Beaton struggled mightily to describe the whole elephant. He didn't quite pull it off, but he demonstrated a solid understanding of a broad set of technologies. It was a worthwhile overview.
Friday, April 21, 2006
Dual Ladder Delusion
There are two kinds of engineers -- those who prefer a career in management and those who prefer to climb the technical ladder. At least that's the conventional wisdom in many large companies. As the authors of a 1986 study concluded, it's really a cruel joke.
The study is called The Dual Ladder: Motivational Solution or Managerial Delusion?. It was authored by Thomas J. Allen and Ralph Katz, both associated with MIT's Sloan School of Management, and was originally published in R&D Management. I wish I could link to an electronic copy on the web, but I can't. A Google search turns up many citations, but no copy of the article. *
The authors begin their article with a frank assessment of the dual ladder's effectiveness:
The problems underlying the dual ladder concept are several ... [One problem is] organizations tend, over time, to diverge from the initial design and intent of the system. For the first few years, the criteria for promotion to the technical ladder may well be followed rigorously, but they gradually become corrupted. The technical ladder often becomes a reward for organizational loyalty rather than technical contribution.
If the dual ladder is often implemented poorly, there must be a reason companies keep the system. Perhaps, if nothing else, it is an effective way to motivate technical talent. To test this theory, Allen and Katz surveyed managers and engineers in "nine major U.S. organizations". They asked:
To what extent would you like your career to be:
- (a) a progression up the technical professional ladder to a higher-level position?
- (b) a progression up the managerial ladder to a higher-level position?
- (c) the opportunity to engage in those challenging and exciting research activities and projects with which you are most interested, irrespective of promotion?
The 2,157 managers and engineers surveyed were asked to rate each of the above choices on a scale of 1 to 7. The results: 32.6% preferred "b", the management ladder; 21.6% preferred "a", the technical ladder; and 45.8% preferred "c", the opportunity to engage in challenging projects. In other words, twice as many engineers were motivated by challenging projects as by promotion up the technical ladder. Furthermore, this preference for challenging projects, irrespective of promotion, increased with age.
Although Allen and Katz did not study the software industry specifically, their conclusions are consistent with those of many seasoned software developers. That is: There is an inherent reward in doing interesting work. Even when there is a technical ladder available, many developers find more satisfaction in working on challenging projects than in climbing the ladder. The technical ladder is often the predominant rewards system for developers, but as you climb the ladder, you usually design and write less software. Therefore the dual ladder system is aligned neither with most developers' goals nor with the ultimate goal of the company -- to produce and make money on software.
What do you think? Is the dual ladder a good system that is just imperfectly implemented? Is it, like democracy, the worst system "except for all those others that have been tried"? Or is there a much better system out there?
* Update: Here's a copy of the article from MIT's on-line library. This version was published in 1985.
Friday, March 17, 2006
The Evolution of Software Design
I think Software Development's Evolution toward Product Design* is an important essay. The author, Danc at Lost Garden, gets a lot of things right. His four distinct eras of software development sound about right to me. I don't quite remember "The Technocrat Era", but I lived through the "Early Business" and "Late Business" eras. I know firsthand that many products of those eras confounded users' expectations.

On the other hand, I think Danc is being unfair when he implies each product of those bygone eras was nothing more than "a pile of poo". His artist's rendition is very funny, but it's still unfair. Danc is also too sanguine about the glories of the "Product Design Era".
Danc seems to believe the key to successful software development is to involve people in berets (artists and designers) early in the project life cycle. He refers to a so-called "Production Pipeline" in which the people in berets lay the groundwork for pliant programmers. To be fair, Danc doesn't think this will be easy:
Unfortunately, many companies that attempt to adopt a product design philosophy will also fail, despite their best efforts. Cultural change is hard work. To adopt product design you must alter the most basic DNA of the company's values.

It's true we need cultural change, and he's right that it won't be easy, but inertia is not the only problem. In my opinion, the bigger problem is that the people in berets don't have all the answers. They certainly don't always agree on the answer.
For example, many designers are orthodox User Centered Design (UCD) disciples. They design products for user personae and insist the software must always adapt to the user. They can cite chapter and verse from the high priests of UCD, including Alan Cooper and Don Norman. But Don Norman himself recently broke ranks with UCD orthodoxy. In Human-Centered Design Considered Harmful, Norman dropped some bombshells:
HCD asserts as a basic tenet that technology adapts to the person. In [Activity-Centered Design], we admit that much of human behavior can be thought of as an adaptation to the powers and limitations of technology. Everything, from the hours we sleep to the way we dress, eat, interact with one another, travel, learn, communicate, play, and relax. Not just the way we do these things, but with whom, when, and the way we are supposed to act, variously called mores, customs, and conventions.
People do adapt to technology. It changes social and family structure. It changes our lives. Activity-Centered Design not only understands this, but might very well exploit it.
And:
Now consider the method employed by the Human-Centered Design community. The emphasis is often upon the person, not the activity. Look at those detailed scenarios and personas: honestly, now, did they really inform your design? Did knowing that the persona is that of a 37 year old, single mother, studying for the MBA at night, really help lay out the control panel or determine the screen layout and, more importantly, to design the appropriate action sequence? Did user modeling, formal or informal, help determine just what technology should be employed?
I think Norman's Activity-Centered Design principles are much saner than strict UCD, but the art of user interaction design is still evolving. The people in berets don't have a silver bullet. It is unlikely they ever will. User interaction design, like software architecture, is hard work. It will be another era or two or three before we get it right even most of the time.
* via Ned
Wednesday, March 01, 2006
Getters, Setters and Object Orientation
As I read Martin Fowler's Getter Eradicator essay, I wondered for a moment what I was missing. As Fowler says:
[One] sign of trouble [in OO design] is the Data Class - a class that has only fields and accessors. That's almost always a sign of trouble because it's devoid of behavior. If you see one of those you should always be suspicious. Look for who uses the data and try to see if some of this behavior can be moved into the object. In these cases it can be useful to ask yourself 'can I get rid of this getter?' Even if you can't, asking the question may lead to some good movements of behavior.

The problem is my work lately has been full of Data Classes, or what my colleagues and I call Value Objects. A value object is nothing but a bag of properties with getters and setters. A value object is almost devoid of behavior. It is usually passed to or returned by a service that implements the behavior. I found myself wondering, "Is this a bad thing?"
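Fowler's question -- "can I get rid of this getter?" -- can be made concrete with a small sketch. The class names here are my own invention, not from either essay: the first class is a bag of data that forces callers to do the work; the second moves the behavior into the object so the getter can disappear.

```java
import java.util.ArrayList;
import java.util.List;

// A "Data Class": nothing but fields and accessors. Every caller that
// wants a total must fetch the list and compute it themselves.
class OrderData {
    private final List<Double> itemPrices = new ArrayList<>();
    public List<Double> getItemPrices() { return itemPrices; }
}

// Moving the behavior into the object eliminates the getter entirely.
class Order {
    private final List<Double> itemPrices = new ArrayList<>();

    public void addItem(double price) { itemPrices.add(price); }

    // The computation lives with the data it needs.
    public double total() {
        double sum = 0.0;
        for (double p : itemPrices) sum += p;
        return sum;
    }
}
```

The second version also protects its invariants: no caller can mutate the price list behind the object's back.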
Fowler's essay refers to an even better essay by Allen Holub called Why Getter and Setter Methods Are Evil. Holub warns the reader about violating encapsulation with getters and setters and then backpedals. There are some valid uses for getters and setters. For example:
The vast majority of OO programs runs on procedural operating systems and talks to procedural databases. The interfaces to these external procedural subsystems are generic by nature. Java Database Connectivity (JDBC) designers don't have a clue about what you'll do with the database, so the class design must be unfocused and highly flexible. Normally, unnecessary flexibility is bad, but in these boundary APIs, the extra flexibility is unavoidable. These boundary-layer classes are loaded with accessor methods simply because the designers have no choice.

That perfectly describes my recent work. I have been working on an abstract, highly flexible Service Provider Interface (SPI). Since I can't force reuse of behavior, I have to define transparent value objects and defer the behavior to each service provider.
All well and good, but Fowler and Holub have reminded me this is not Object Orientation. I guess it is closer to Service Orientation. I am reluctant to call it that only because there is so much other baggage associated with Service Oriented Architecture. In any case, the point is this: Use of getters and setters can be a bad habit. Although I continue to work on my SPI, I occasionally venture up into the Object Oriented layers above my interface. When I do, I'll be on the lookout for inappropriate getters and setters.
Tuesday, December 06, 2005
Form Follows Function
For the past several weeks, I have been working on an API at work. The first step was to design the API and have it reviewed by other software architects. I was surprised by how much time we spent debating the mere form of the API. For example, we argued a lot about whether to use interfaces or classes for value objects, but comparatively little time discussing the function of the API. I am not saying the interfaces vs. classes debate is not important, but often the argument was, "By convention, all value objects must be defined as interfaces (or classes)." In other words, it was a case of form for form's sake. The argument was not grounded in the function of the API.

Now that I have moved from design to implementation, I am noticing another kind of tension. As developers begin to use the emerging API, they make enhancement requests. Often these requests are to add a method that is outside the scope of the API, or worse, to make an existing method do some additional work that isn't obvious from the method definition. As an example, I've had several requests to make methods aware of the User Interface (UI) context, but this is an API for accessing data and metadata. None of its implementations should introduce side-effects in the UI unless all of them can guarantee the same behavior. I think it is important to keep the API contract as simple as possible, so I generally decline such requests.
It strikes me the answer to both kinds of debates is the same: "Form follows function." This phrase first gained currency in the discipline of building design. The late 19th century architect, Louis Sullivan, and his disciple, Frank Lloyd Wright, were its most famous proponents. The phrase is sometimes misinterpreted as a statement of precedence. In other words, it is interpreted as, "function precedes form," but that's not really the idea. Rather, Sullivan and Wright were reacting against the conventions of the day. They thought it was silly to build Renaissance train stations, Greek Classical post offices, and English Tudor homes. They were against ornamentation for ornamentation's sake. As Wright said, "Form and function should be one, joined in a spiritual union."
"Form follows function" has been applied to lots of other design activities besides building design. I actually haven't heard it used in reference to the design of an API, but I think it makes sense. It carries with it two important ideas. When designing an API, you should:
- Resist unnecessary ornamentation. Some conventions are certainly of universal importance, but others should be applied in only some circumstances and some are mere fads. Each convention should be tested against the function of the API. Does it really make sense in this context?
- Make each method as explicit as possible. To improve the usability of your API, give each method a descriptive name and make it do what it says -- no more, no less. Avoid the temptation to have methods cause side effects in other parts of the system. This is particularly important for APIs that will have multiple implementations.
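The second point can be sketched in code. This is a hypothetical example; none of these names come from the actual API I was reviewing. The data-access method does exactly what its name says, and any UI reaction is composed around the API by the caller rather than baked into an implementation as a hidden side effect.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// The API contract: fetch metadata, nothing more. No hidden UI work,
// so every implementation can honor the same simple promise.
interface MetadataService {
    String describe(String key);
}

// One possible provider; others could be backed by a database or a file.
class InMemoryMetadataService implements MetadataService {
    private final Map<String, String> store = new HashMap<>();

    public void put(String key, String description) { store.put(key, description); }

    @Override
    public String describe(String key) { return store.get(key); }
}

// UI awareness lives in the caller. The caller decides how to display
// the result; the API implementation never touches the UI.
class UiLayer {
    static void showDescription(MetadataService service, String key,
                                Consumer<String> display) {
        display.accept(service.describe(key));
    }
}
```

With this split, a request like "make describe() update the status bar" becomes a change to the caller, not a change to the API contract.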
Tuesday, November 22, 2005
Erich Gamma on Shipping Software
In part five of a series of interviews at Artima Developer, Erich Gamma talks about Eclipse's culture of shipping software. He talks about six week milestones, transparent planning, constantly "eating your own dog food", and the controlled end game. This is all very familiar. It is almost identical to the Notes development culture at Iris Associates.
There are at least two important differences. Notes release cycles were historically much longer than those of Eclipse. And Notes never was an open source project. However, these differences just highlight the importance of the common themes. If I had to pick one, I'd say the most important is "eat your own dog food". There are lots of different strategies for quality assurance. Nothing compares with running your business on the product you are building.
(via Jeff Atwood, at Coding Horror)
Wednesday, October 26, 2005
Conway's Law
In a 1968 article called How Do Committees Invent?, Melvin Conway minted Conway's Law:
Organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations.

This makes sense to me. A product like Eclipse -- with loosely coupled components and a very small kernel -- can only result from a loose "open source" organization with little central control. On the other hand, an organization with a rigid command structure and central planning is most likely predestined to produce software that is unwieldy and difficult to change.
It also suggests how designing the wrong organization can result in gaps in a system design. I know of one client software project that never had a network group although one was desperately needed. The hope must have been that a network layer would emerge out of the shared requirements of many groups, but it didn't happen. Because there was no assigned responsibility for a network layer, each group created a different set of network utilities.
Dr. Conway calls this phenomenon homomorphism. There is a "structure-preserving relationship" between the organization and the design it produces. This insight is significant in itself, but Dr. Conway also develops some interesting corollaries. One of them is:
... the structure of the system will reflect the disintegration which has occurred in the design organization.

In other words, when two groups in the organization do not communicate well, the components they produce will not communicate well either. This suggests a better way to monitor progress in software development. In addition to monitoring the progress of individual components, we should also ask: are groups X and Y communicating well? If they aren't, it might indicate an impending breakdown in the design.
While reading How Do Committees Invent?, I experienced several "Ah-ha" moments. Once again, I am amazed something written in 1968 is still relevant today. Follow the preceding link for the full text of Melvin Conway's article.
The truth about Google. I found my way to Conway's Law by reading Don Norman's essay called The Truth about Google's So-called Simplicity. That is also interesting reading.
Wednesday, October 12, 2005
Sustaining Irrational Technology Choices

There's another great essay on Hacknot. In A Dozen Ways To Sustain Irrational Technology Selections, the author explains there is a myth about software developers:
External observers often think of programmers as being somewhat cold and emotionless...Those who have watched programmers up close for any length of time will know that this is far from the case. I believe that emotion plays a far larger part in IT decision making than many would be willing to admit. Frequently developers try and disguise the emotive nature of their thinking by retrospectively rationalizing their decisions...
Tongue in cheek, the author goes on to list twelve ways to protect your image in the face of a bad technical decision. Here's number ten:
10. Exclude The Technically Informed From The Decision Making Process
As a self-appointed evangelist for your chosen technology, your worst enemy is the voice of reason. The technology's inability to fulfill the promises its vendor makes should be no obstacle to its adoption in your organization - and indeed, it won't be, so long as you can keep those who make the decisions away from those who know about the technology's failings. Let there be no delusion amongst your staff and colleagues that it is management's purview to make these decisions, and the techies' job to implement their decision. Some will try and argue that those who know the technology most intimately (technical staff) are in the best position to judge its value. Assure them that this is not so and that only those with an organizational perspective (management) are in a position to assess the technology's "fit" with the corporate strategy. Allude to unspoken factors that influence the decision to use this technology, but are too sensitive for you to discuss openly (conveniently making that decision unassailable).
In my opinion, it's a good list, but the author missed at least one item. Here's my contribution:
13. Banish the Infidel
When pleading your case to management, lament the fact that those who disagree with you just aren't "team players". Label the dissenters as malcontents. Make it clear it is they, not you, who are being emotional about your technical decision. Of course, you can't literally banish the dissenters -- especially if it is a large group. Who would implement your design? However, you can ostracize the dissenters to the point where their voices are not heard, and here's the silver lining. When your design fails, you can blame the implementers. Your choice of technologies was sound, but the dissenters, perhaps unconsciously, made it fail by suboptimizing the implementation.
What else is missing from the list?
Friday, August 19, 2005
Ethics and Software Development
Mr. Ed has posted another thought provoking essay on Hacknot. The Crooked Timber of Software Development asserts that software development is more of an occupation than a profession:
The key concept in any profession is that of integrity. It means, quite literally, "unity or wholeness." A profession maintains its integrity by enforcing standards upon its practitioners, ensuring that those representing the profession offer a minimum standard of competence. Viewed from the perspective of a non-practitioner, the profession therefore offers a consistent promise of a certain standard of work, and creates the public expectation of a certain standard of service.
In other words, unlike the medical profession, we have no residency system, licensing board, or code of ethics. Therefore software development is not a real profession.
I suspect Mr. Ed is using a rhetorical flourish to make his point. In any case, I don't agree there is one standard for all professions. Software development, as a profession, is certainly at a much earlier stage in its evolution than medicine. But I do agree with his main point:
If we are ever to make a profession of software development, to move beyond the currently fractured and uncoordinated group of individuals motivated by self-interest, with little or no concern for the reputation or collective future of their occupation, then some fundamental changes in attitude must occur. We must begin to value both personal and professional integrity and demonstrate a strong and unwavering commitment to it in our daily professional lives.
Mr. Ed closes with the story of David Parnas, a software developer who resigned from the Department of Defense over his objections to the Strategic Defense Initiative (SDI). There might be a hint of politics masquerading as ethics in Parnas's story, but you can't argue with his code of ethics:
- I am responsible for my own actions and cannot rely on any external authority to make my decisions for me
- I cannot ignore ethical and moral issues. I must devote some of my energy to deciding whether the task that has been given is of benefit to society.
- I must make sure that I am solving the real problem, not simply providing short-term satisfaction to my supervisor.
Good stuff.
Tuesday, August 16, 2005
Contestants Making Brownie Points
In the early 1970s, the prophet Dr. Brooks wrote this about a major software development project:
The project was large enough and management communications poor enough to prompt many members of the team to see themselves as contestants making brownie points, rather than as builders making programming products. Each suboptimized his piece to meet his targets; few stopped to think about the total effect on the customer. This breakdown in orientation and communication is a major hazard for large projects.
If you are a student of human nature, it's no surprise the contestants are still at it. In some organizations, it is all about maximum visibility for minimum effort. The truly frightening thing is how far the tools for self-promotion have evolved since the 1970s. These days contestants wield PowerPoint presentations, electronic mail, and instant messaging in their quest for maximum visibility. There's nothing wrong with Joe Programmer using such tools to communicate, but in some cases, he should be spending more time with his favorite IDE.
Of course, the above quote is from The Mythical Man-Month. Having stated the problem, Dr. Brooks begins framing the solution in the very next sentence:
All during implementation, the system architects must maintain continual vigilance to ensure continued system integrity. Beyond this policing mechanism, however, lies the matter of the attitude of the implementers themselves. Fostering a total-system, user-oriented attitude may well be the most important function of the programming manager.
In other words, when a software project is in trouble, it is ultimately not the fault of individuals who "suboptimize". It is the architect's job to monitor the integrity of the system and the project manager's job to foster the right attitude. This is a classic system of checks and balances. It may be human nature for developers to minimize effort, but in a healthy organization, the leadership encourages and rewards behavior that results in a better product.
The above quotes come from Chapter 9, Ten Pounds in a Five-Pound Sack. In this context, Dr. Brooks was describing how to safeguard performance when developing large systems. He might just as well have been describing how to safeguard the health of a development organization. Only a dysfunctional organization allows a developer to choose building a better career over building a better product. In my opinion, the only way to build a better career should be by building a better product first.
Monday, June 27, 2005
Time to Do Software Right
I've been thinking about two recent posts by Jeff Atwood. In UI is Hard, Jeff cites evidence that UI programming is harder than server-side programming. His recommendation is to think about the UI first.
In my opinion, both the premise and the solution are wrong (or at least they aren't universally correct). UI programming is certainly hard. The UI should never be an afterthought, but server-side programming can be equally hard to get right. You must design for scalability, performance and reliability on the server-side. If you don't, even the most elegant front-end will be broken from the user's perspective.
So it shouldn't be "UI first". Instead it should be "UI and server-side together". And you might need a few iterations to get it right. The problem is iterations take time.
In The Broken Window Theory, Jeff argues we should take the time to fix "broken windows" (bad designs, wrong decisions, poor code). Failure to do so breeds an atmosphere of sloppiness. When a developer sees problems throughout the code, he wonders why he should spend the time to do his part right.
I agree 100% with this analysis, but let's consider the cause of the "broken windows". Is it incompetence or laziness on the part of developers? Sometimes, but more often I think it is lack of time. Compressed schedules are epidemic in the software industry. Because of the schedule, we short-change the design phase of projects, we ignore the need to iterate in the design-development process, and we pretend not to see the myriad "broken windows" in our code. After all, we can fix the process and the code "next release".
In The Mythical Man-Month, Dr. Fred Brooks likens good software to good cooking. Brooks quotes the resolute Chef Antoine:
Good cooking takes time. If you are made to wait, it is to serve you better, and to please you.
If we had more Chef Antoines in the software industry, we might have happier customers.
Friday, June 10, 2005
The Mythical Surgical Team
The Mythical Man-Month is a masterpiece on the subject of software project management. Although the book was first published in 1975, the author, Dr. Fred Brooks, is still quoted by software developers and managers today. "Plan to throw one away," "Take no small slips," and "Adding manpower to a late software project makes it later," have long since become conventional wisdom. We don't always follow the conventional wisdom, mind you, but we can quote it.
It is amazing how relevant The Mythical Man-Month still is, but Chapter 3 struck me at first as quaint if not downright bizarre. The chapter is called "The Surgical Team". In it Dr. Brooks promotes an idea first developed by IBMer Harlan Mills in 1971. The idea is to organize large software development projects into multiple "surgical teams". Each team is headed by a chief programmer, the surgeon, who does most of the delicate work. In a real surgical team, the surgeon does all of the cutting and stitching. He is supported by a staff of specialists with more mundane roles. In the Brooks/Mills scheme, the chief programmer does most of the programming. He has a staff of nine people to take care of mundane tasks.
Brooks goes into a lot of detail about the supporting roles. I won't give all the details, but here is a summary:
- The copilot is the chief programmer's right-hand man. He can do the development work but he has less experience. He often represents the team at meetings and otherwise off-loads the chief programmer from duties other than pure development.
- The administrator manages people, hardware and other resources required by the team.
- The editor is responsible for editing documentation written by the chief programmer.
- Two secretaries -- one each for the administrator and editor.
- The program clerk keeps all the records for the project including source code and documentation.
- The toolsmith builds specialized programming tools to the chief programmer's specifications.
- The tester develops and runs both unit tests and system tests.
- The language lawyer is an expert on the computer languages used by the chief programmer. He provides advice on producing optimal code.
My point is that Brooks's vision has largely been realized, but in a way he didn't predict -- by automation. We haven't hired a support staff for each chief programmer; instead, we have automated most of the surgical team's tasks. Source control systems keep the program clerk's records, testing frameworks take on much of the tester's job, and modern IDEs stand in for the toolsmith and the language lawyer.
Does that mean each developer is a self-contained surgical team? Unfortunately, the answer is no -- no more than hiring a real surgical team would make me a surgeon. A surgeon is made by training and experience, not by the resources at his disposal.
In The Mythical Man-Month, Brooks establishes at least two central premises before promoting the surgical team concept:
- The main problem with large software development projects is one of communication. The more developers on the project, the bigger the communication problem is.
- There is a huge productivity gap among software developers. The good developers are very, very good. The bad ones are very, very bad.
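Brooks's first premise is often summarized by the pairwise-channel count: n developers share n(n-1)/2 potential communication paths, so the coordination burden grows roughly with the square of the team size. A quick sketch of the arithmetic:

```python
# Pairwise communication channels among n developers: n * (n - 1) / 2.
# This is the arithmetic behind "the more developers on the project,
# the bigger the communication problem".

def channels(n: int) -> int:
    """Number of distinct pairs among n people."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 50):
    print(f"{n:3d} developers -> {channels(n):5d} channels")
# 2 -> 1, 5 -> 10, 10 -> 45, 50 -> 1225
```

Going from 5 developers to 50 multiplies headcount by 10 but multiplies the communication paths by more than 100 -- which is why keeping teams small pays off so disproportionately.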
My advice is to go back to the drawing board. Hire the best developers to do the work. Pair them with more junior developers for the purposes of training and insurance -- not necessarily to do the work. And, at all costs, keep teams as small as possible.