I know what you're thinking. Well, no, I don't. But I do think I have a pretty good idea: “here's another guy saying the same old thing about how to write software”, and honestly, I'm not. If I were, the title of this writing would've been How to Write an Application Successfully. This is not a guide to software programming, or to learning how to code. It is basically what to do before that.
The main topic here is: plan ahead, really ahead. I cringe at the sight of a developer wanting to write a program that has already been done many, many times before (how many iTunes cover art displayers are there already?). So, before you do anything, before you even begin thinking of a line of code, think of the purpose of your new program. I know this is very The-Matrix-y, but bear with me, as I haven't seen this topic discussed as much as I would like in software development forums.
A program has to fill a need; it may be a need that nobody has thought of before as something worth satisfying (a car doesn't really need leather seating, but when given the option, people usually welcome it). This can take ages to find, but it's what will give life to your program; it's its soul, and if you don't provide your program with a clear purpose, it will die a very painful death. Why? Because of our nemesis: confusion. Confusion is, in my opinion, the main reason for program deaths in the last ten years. Users want ease of use as well as functionality, and if they don't get both, they'll look somewhere else, or may even start writing their own application. Because of this, even if the main purpose of your program has already been served by other programs, the sole fact that yours is easier to use than the others will gain it a lot of popularity. Look at the iTunes Store, for example: it has sold over three billion songs and is still gaining popularity, all because of the ease of buying a song and putting it onto an iPod, plus non-obtrusive DRM, regardless of the fact that there are other ways of obtaining that same song for free. It was planned from the user's point of view, and it was done with ease of use in mind.
So, the key ingredient (in my opinion, other than functionality; and the jury is still out on that one) is ease of use. I want to point out that this doesn't mean eye candy (shiny buttons, transparent backgrounds, and cool transitions); Mac OS X developers love to throw that stuff around a lot, and, while I do appreciate it, I've seen that sometimes it actually adds to the confusion. Ease of use is just what it sounds like: it is easy to understand how the program works, what it does, and what needs to be done to make it work. The latter is a very important subject: I have helped debug many different types of applications, and the question at the top of my list, the one that almost never gets answered, is “What, specifically, do you want the user to do to use your application?”. When developers do answer it, it's usually along the lines of “I'll leave that to the end. I want it to work first.”. That sounds logical from a developer's point of view, but what happens then is that you've alienated the user experience from your thought process completely, potentially making your program clunky and difficult to understand. Not all users are developers, and even fewer think like one. It is important to mention that a lot of applications today (e.g. Microsoft Word, the whole of GNU/Linux) were developed like this (functionality first, the rest later) and have become very popular, so I'm not saying that this method is wrong: it has worked on some occasions. However, remember that there are countless seminars and workshops out there that teach how to use most of those programs, so their popularity cannot be attributed to their ease of use. Interesting side note: the popularity of Linux rose drastically only after an understandable, easy-to-use graphical user interface was added to the mix.
If you're still around, you're probably asking “OK, then, so how do I make an easy-to-use application?” This is where planning way ahead pays off. After you've decided what it is that your application will do, sit down (if possible, with a graphic designer) and design your graphical user interface first: every window, every word, every instruction that is going to appear. Show it to people you trust who don't have a developer background, ask them to do a certain task, and see if they can figure out how to do it just by looking at the interface sketches. It's silly, I know, but the information you get from this stage is golden, because you're already debugging your software without having to write code. A lot of the major changes made to applications occur during the testing stage with people outside the software company (known as the beta stage); they are usually changes to the user interface, and sometimes they imply a major change to the functionality of the program, which means that you'd be working backwards. If you do it beforehand, a lot of these major changes will be taken care of before the developing stage, when you start to actually write code. Now, doing this also means that you'll be working in an uncomfortable position: going this route, it is very common for the decisions made in the designing stage to make development harder, longer, and more frustrating. This is because your developer mentality is telling you that if the whiny little user would just learn to use your program in the way you know would be easier to code, you wouldn't need to spend an extra hour every day working around it. However, keep reminding yourself that in this arena the user is not obligated to work hard, you are; and that this hard work will pay off.
By the way, we're still not in the developing stage (that bit was just a preamble to what's to come). After the designing stage comes the pre-developing stage, in which every part of the design inspires a part of your code. Every button and menu item becomes a module of code that can be called from any part of your program. In this day and age, most high-level languages (like C#, Java, etc.) are object-oriented; follow that path, extrapolate it. This also makes the developing stage easier to delegate, which in a software company of more than five people is very important. When you have planned all your modules, rank them by priority, assign people and a time frame to each of them, and start coding. By this time, you should have a pretty good idea of when a first draft of your application should be finished.
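To make that concrete, here's a minimal sketch in Java (every class and module name here is hypothetical, invented just for illustration): each element from the interface design becomes its own module behind a common interface, registered in one place so any part of the program can call it.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // One interface for every action the UI design calls for.
    interface Module {
        String name(); // matches the button or menu item in the design
        void run();    // the actual work, easy to assign to one developer
    }

    // The "Export" menu item from the sketches becomes its own module...
    class ExportModule implements Module {
        public String name() { return "Export"; }
        public void run()    { System.out.println("Exporting document..."); }
    }

    // ...and the "Preferences" window becomes another, owned by someone else.
    class PreferencesModule implements Module {
        public String name() { return "Preferences"; }
        public void run()    { System.out.println("Opening preferences..."); }
    }

    public class App {
        // Each UI element in the design maps to exactly one module.
        private static final Map<String, Module> REGISTRY = new LinkedHashMap<>();

        static void register(Module m) { REGISTRY.put(m.name(), m); }

        public static void main(String[] args) {
            register(new ExportModule());
            register(new PreferencesModule());
            // Simulate the user clicking "Export" in the finished UI.
            REGISTRY.get("Export").run();
        }
    }

The point is the structure, not the specifics: every piece of the design has exactly one home in the code, so delegating a module means delegating a visible piece of the interface.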
When it's done and you've tested it yourself, go into the beta stage. I know we kind of already did that before (in the designing stage), but it's important to put that information to the test again; besides, no programmer is perfect, bugs will come about, and this stage will shine a light on many of them. If there are any bugs, finding where they're hidden is simple, since you now know which module is connected to which part of the application: if the user clicked a certain area and caused a bug, it's just a matter of checking which module is connected to that area to get an initial track of the bug's whereabouts, following it to its home, and squashing it. I know I'm oversimplifying the process, but remember that one of the hardest parts of debugging a program is finding where a bug begins: that is why, whenever you hand your computer to a technician because of some problem, it comes back a week later without any fixing because “the problem could not be reproduced”. This also matters when you start adding the new features your users want: because your program is modularized, adding to it is easier than it would've been otherwise. Obviously, the same applies after you publish it and are still getting bug reports and feedback on how to improve it: fixing or modifying your application will be more straightforward.
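Continuing the same hypothetical sketch from above, this lookup becomes almost mechanical: wrap each module call in a dispatcher that records which UI element triggered it, so any failure immediately names the module to start searching in.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class Dispatcher {
        // Same idea as before: one UI element, one module (here as a Runnable).
        private final Map<String, Runnable> registry = new LinkedHashMap<>();

        public void register(String uiElement, Runnable module) {
            registry.put(uiElement, module);
        }

        // Called whenever the user interacts with a UI element.
        public void dispatch(String uiElement) {
            try {
                registry.get(uiElement).run();
            } catch (RuntimeException bug) {
                // By construction, the bug's initial whereabouts are known:
                // it lives in whichever module is wired to this UI element.
                System.err.println("Bug in the module behind '" + uiElement + "': " + bug);
            }
        }

        public static void main(String[] args) {
            Dispatcher d = new Dispatcher();
            d.register("Save",   () -> System.out.println("Saved."));
            d.register("Export", () -> { throw new IllegalStateException("oops"); });

            d.dispatch("Save");   // prints: Saved.
            d.dispatch("Export"); // logs which module the bug lives in
        }
    }

In a real application the “module” would be a full class rather than a one-liner, but the mapping is the point: the user's click hands you the starting point of the bug hunt.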
In any case, from then on, the way you publish your application, how much you charge for it, whether it's open-source or not, etc., is completely up to you, but do understand that this will also dictate, to some degree, how popular it may become. I won't get into the details of how you should go about it, as that is a whole subject unto itself, but I will say this: many donationware and open-source programs out there have become increasingly popular and manage to pay for themselves. I'm not measuring success by the amount of money you get from the project; I'm measuring it by the number of people using it. And, frankly, I think this is the best way of measuring it, and the only way of enticing people to use your application is to make it understandable and easy to use.
All of this can also be applied to any kind of software, from websites to security systems. To quote Bud Tribble, Vice President of Software Technology at Apple Inc.:
We spend a lot of time making the security features easy to use for our users. [...] As a result our users keep their systems up-to-date. [...] We paid attention to ease-of-use.
[...] Our security principles are actually very simple:
- Good security starts with design, not something you slap on.
- Good security is easy to use, security that is not easy to use does not get used.
- Good security continues to improve, it's not a one time deal, it's not a one shot, it's something that we are continually paying attention to with every release.
Remember that this is a company with an operating system that hasn't had any known virus out in the wild (another subject unto itself), so their views on this subject are very relevant.
I'm always looking for a good debugging project, I love answering questions about this subject (it makes me feel smarter than I really am), and I love debating this and other topics. If you have any questions or comments, feel free to post them here and I will make sure to answer all of them.