The September 1 issue of Fortune has an article on multicore processors (I don't think there's a link yet). Basically, it's telling the business folks what we've known for a while: the free lunch of Moore's law automatically making our code faster is over, and no one really knows how to program for these systems.
None of this is news to us. So why am I wasting your time bringing it up again? Well, I want to know who this is really going to affect. If we assume that most software will eventually be delivered over the web, then the heavy lifting is going to be done on the server by something like a LAMP stack, and you're going to use JavaScript for the blinky lights. That changes how you approach concurrency, and how you take advantage of multicore, if you can at all.
On the server, a multicore chip is simply going to mean more threads in Apache or more Mongrels. How much do you really win by spawning multiple threads to handle a request, though? That's just one more thread waiting for the database query to come back over the network. Likewise on the client: what are multiple threads going to get you other than more resources allocated to waiting on an Ajax call? I guess you could argue that it'll allow the UI to be more responsive, but we already know how to do that.
Before I get flamed, I want to mention that I am in fact aware that there are some areas that really do benefit from the increased horsepower, in particular games, audio/video, and large-scale data processing. We already know that each of those can benefit from parallelism. What can your average web application do with those kinds of features, though? Or rather, where should I be looking for application domains that take advantage of this new technology rather than simply having technology for technology's sake?
Sunday, August 24, 2008
Tuesday, August 19, 2008
Code Sucks
Most code sucks. Mine included... especially mine... but other people's too. Despite the best of intentions and planning and playing with paper sketches and squiggly lines, all code eventually turns into an impenetrable mess.
You can follow all the rules about writing code and organizing modules and whatnot, but you end up in the same place. The problem is that we don't know the rules for writing software yet. The rules we do have are contradictory. And we work with people who don't know the rules and/or don't care about them.
It drives me insane.
Sometimes you can fix parts of the code. Line by line, class by class, you can refactor and test as you go. Or, at least, that's what I thought. Sometimes there's just too much inertia. Your best code doesn't work in the presence of the existing code. Sometimes the code, the project, and the team are just too far gone.
It's a sinking ship... and there's no honor in going down with her. Man the lifeboats and wait to get picked up by another ship.
Saturday, August 9, 2008
It's the Software, Stupid
I've had the opportunity to work with high-end smartphones for almost my entire career. What I've learned is that the handset manufacturers like to sell based on hardware features, while the carriers (i.e., Verizon and AT&T) like to run their own versions of the software. The result is a package that's never quite satisfying. In most cases you end up with a hunk of plastic that, despite grand claims, can't do much more than make phone calls.
Last week, I dumped Verizon for AT&T so I could get an iPhone. Here in Columbus, Verizon has the best network, period. AT&T's coverage is rather spotty. I put up with it, though, because the iPhone is the first handset that gets the hardware/software package right (translation: the hardware is decent and the software actually works).
Verizon offered us a deal on the LG Voyager to try to get us to stay, but they completely miss the point. No one wants a phone just because the spec sheets look similar. A smartphone is going to sink or swim based on the strength of its software. And until the rest of the market gets this and stops half-assing their software, the iPhone will win.
Thursday, August 7, 2008
Naked Objects in WPF?
There is a rather niche architectural pattern called naked objects. In this pattern, you build your entire application as an abstract domain model. If you do MVC, this is like building the application using only the M. Your users will then interact with this model directly.
I call this a niche pattern because none of the major frameworks support it. What I want to explore though is how we can fake it in WPF.
If you use strict naked objects, you won't build a user interface. The framework will take care of it for you. Frankly, this kind of scares me. I can only imagine the monstrous forms that will come out of this thing.
In WPF, you can define a template for arbitrary classes. What I'd like to try is to define a UI form by dropping domain objects and importing a resource that will act like a skin. That resource will have the templates that define how the properties of my objects will map to UI controls.
This is just a thought I've been kicking around though. One problem I haven't worked out is how to map events on the controls to method calls on my objects without making every single method a routed event handler. I'll have to see if there's a way to generically capture the events on some controller class and automatically re-map them to method calls.
Why would I want to do this? Well... I've got a couple of 2,000-line code-behind files I've inherited that prove, yet again, that you can write spaghetti FORTRAN in any language.
Wednesday, August 6, 2008
A Crazy Idea about Dependency Injection
At the end of my post yesterday, I mentioned the term Dependency Injection. I can't go into too much detail because it's not a concept I've really been able to work with much... yet. My only real exposure is the Google Guice book.
That's not gonna stop me from diving right in though.
Basically, when you use DI, you never use the new operator, which means you never explicitly allocate objects. In a modern programming language (where modern implies garbage collection), this gives you quite a bit of flexibility to reconfigure your app without changing your business logic. You just configure which concrete classes are mapped to which interfaces. For instance, you can remap ICurrency from Dollar to Euro in one place.
What struck me, though, are the implications this might have for C++. By taking 'new' out of the program logic, we should be able to abstract away most, if not all, of the manual memory management that is so painful.
Instead of allocating an instance object in the constructor and freeing it in the destructor, we inject an already allocated object into the constructor. It is now the DI container's job to allocate and deallocate that object. Finally, assuming that you have a good enough DI framework (is this like a smart enough compiler?), it should take care of that task for you. You just map types.
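To make that concrete, here's a rough C++ sketch of what I'm picturing. The ICurrency/Dollar/Euro names are carried over from above; Invoice and the hand-wired main() are just stand-ins I made up for whatever a real DI container would do for you.

```cpp
#include <iostream>
#include <memory>
#include <string>

// The interface the business logic depends on.
class ICurrency {
public:
    virtual ~ICurrency() {}
    virtual std::string Symbol() const = 0;
};

class Dollar : public ICurrency {
public:
    std::string Symbol() const override { return "$"; }
};

class Euro : public ICurrency {
public:
    std::string Symbol() const override { return "EUR "; }
};

// Business logic: no new, no delete. The dependency arrives already
// allocated, and this class never has to think about its lifetime.
class Invoice {
public:
    explicit Invoice(const ICurrency& currency) : currency_(currency) {}

    void Print(double amount) const {
        std::cout << currency_.Symbol() << amount << std::endl;
    }

private:
    const ICurrency& currency_;
};

// main() stands in for the DI container: the one place that decides
// which concrete class backs ICurrency and that owns the allocation
// and deallocation.
int main() {
    std::unique_ptr<ICurrency> currency(new Euro()); // remap to Dollar here; nothing else changes
    Invoice invoice(*currency);
    invoice.Print(19.99);
    return 0;  // currency is freed here, by the "container", not by Invoice
}
```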
Am I crazy?
Tuesday, August 5, 2008
What's in your object?
In the ThoughtWorks Anthology, there's an essay by Jeff Bay called Object Calisthenics. Briefly, the essay lays out a set of rules for designing classes that are rather strict and quite different from what you'd normally use to guide your programming. If you do a quick Google, you'll see that this essay has received quite a bit of criticism in the blogosphere. Most people seem to say that the rules are too strict and just get in the way.
If you haven't read the essay, there's an overview here.
I'm applying these rules as much as I can in my day to day development, and I've noticed a few things. First, it's not kind toward legacy code bases. It is especially unkind toward WPF applications. WPF is built almost entirely on .NET properties and practically begs you to break encapsulation. Yuck... but I blame WPF rather than these rules.
Next, it's turned my code inside out. I'm used to grabbing a few objects, getting some data out of them, doing something to that data, and then passing that data to some other method. Usually this is so I can then return a new value. Since I'm not doing gets anymore, I have to send commands to those objects and then let them do some computation on behalf of the caller. It kinda reminds me of the message passing style you do in Erlang.
So what's this gotten me? For one, unit testing is easier. If you use get, you end up with an object that you then have to verify. This object will have state of its own which will likely have to be verified. This isn't a unit test anymore. It's an integration test. Instead, by sending messages, especially with mocks as parameters, you only have one layer of code to test.
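For what it's worth, here's a tiny C++ sketch of the shape I'm describing. Account and IAuditLog are names I made up for illustration; the point is that the caller sends a command, and the test only has to check what the fake collaborator saw.

```cpp
#include <cassert>
#include <string>
#include <vector>

// A collaborator the Account reports to. In a test we pass a fake.
class IAuditLog {
public:
    virtual ~IAuditLog() {}
    virtual void Record(const std::string& entry) = 0;
};

// "Tell, don't ask": callers send the withdraw command instead of
// calling a getter, doing the math themselves, and setting it back.
class Account {
public:
    explicit Account(int balanceCents) : balanceCents_(balanceCents) {}

    void Withdraw(int amountCents, IAuditLog& log) {
        if (amountCents > balanceCents_) {
            log.Record("rejected");
            return;
        }
        balanceCents_ -= amountCents;
        log.Record("withdrew");
    }

private:
    int balanceCents_;
};

// A hand-rolled mock: the test checks what was recorded. One layer of
// code to verify, no second object graph to pick apart.
class FakeAuditLog : public IAuditLog {
public:
    void Record(const std::string& entry) override { entries.push_back(entry); }
    std::vector<std::string> entries;
};

int main() {
    Account account(500);
    FakeAuditLog log;

    account.Withdraw(200, log);   // succeeds
    account.Withdraw(9999, log);  // rejected, balance untouched

    assert(log.entries.size() == 2);
    assert(log.entries[0] == "withdrew");
    assert(log.entries[1] == "rejected");
    return 0;
}
```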
Also, Dependency Injection becomes much more natural. But that's a topic for another post.