I think there are a lot of issues that stem from the wrong choice of abstraction - either in a user interface or deeper within a system.
I have noticed that when I'm working with complex systems (e.g. VMware's virtualization stack), their behavior can seem non-deterministic: you do the same thing twice and get different results; the system works one moment and fails the next. Because of this I tend to apply decidedly non-technical adjectives to those systems - moody, temperamental, angry. All this applies if you look at the system top-down, from the interface that is presented to you. The system gives you abstractions to manipulate (in the case of VMware: machines, switches, storage) and if those abstractions don't behave the way they're expected to, then the system seems less technical and more emotional.
If you approach the system bottom-up, this should be impossible. Everything should work exactly the way it was designed to. And indeed it does - computers are completely deterministic; they can only do what they're told.
So why do we have this experience of non-determinism?
I suspect the issue lies in the abstractions themselves. When you choose an abstraction, you're making promises to the user, namely that the implementation behind the abstraction is able to work just like the real thing.
Let's make this more concrete. If you get behind the wheel of a car in a driving game, you expect it to have certain features, including acceleration, braking, and steering. No one told you that explicitly. Since the game presented a virtual car to you, you assumed it would work like a real car. If the car moved up and down instead of forwards and backwards, that would surprise you. There's an implicit promise on the part of the game that this abstraction (in this case, it's an abstraction over a physics engine) will work just like what it's pretending to be. In that same way I expect a virtual machine to behave just like a real computer, and a virtual switch to behave just like a real switch. If they do not, I am left with an unfulfilled promise, and something that seems non-deterministic - but in fact, it's just not meeting the interface it's presenting.
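The driving-game promise can be sketched in code. This is a minimal toy, not any real game engine; `Car`, `accelerate`, `brake`, and `tick` are all invented names. The point is that nothing in the code enforces the implicit promise - only the implementation's behavior can keep or break it.

```python
# A toy driving-game car. The abstraction implicitly promises real-car
# behavior: accelerating moves you forward, braking slows you down.
from dataclasses import dataclass

@dataclass
class Car:
    position: float = 0.0  # distance travelled along the road
    speed: float = 0.0

    def accelerate(self, amount: float) -> None:
        # Implicit promise: accelerating increases forward speed.
        self.speed += amount

    def brake(self, amount: float) -> None:
        # Implicit promise: braking slows the car but never reverses it.
        self.speed = max(0.0, self.speed - amount)

    def tick(self) -> None:
        # One simulation step: speed carries the car forward.
        self.position += self.speed

car = Car()
car.accelerate(5.0)
car.tick()  # the car moves forward, as a real car would
```

If `tick` instead changed some unrelated field - moving the car "up and down" - every test of the interface's types would still pass, but the promise the abstraction made would be broken.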
There are clearly two paths forward for fixing this problem. The obvious first approach is to be conscious of all the promises that the abstractions are making, and fulfill all of them in the implementation. This can be done, but I suspect it will never be perfect. The other approach is to pick the right abstractions for the backend - and maybe invent something completely different when you want new behavior, instead of claiming you can support an interface (with some small differences). VMware took this second approach with their virtual switches. You don't set up VLANs on virtual switches, you set up portgroups. There is no concept of a portgroup in a real switch - it's a new, completely different thing, and indeed it behaves differently than a VLAN does. It has some extra settings, and though those settings include VLAN tagging, portgroups are orthogonal to VLANs - you can have many portgroups use the same VLAN tag. VMware made the right move here. They wanted different behavior than anything that existed, so they created a new concept to encompass that behavior.
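The portgroup idea can be sketched the same way. This is a hypothetical illustration, not VMware's actual API: `PortGroup` and its fields are invented names. What matters is that the new concept carries a VLAN tag as just one setting among several, rather than pretending to be a VLAN.

```python
# A hypothetical sketch of the portgroup idea: a new concept with its own
# settings, one of which happens to be a VLAN tag. Invented names, not
# VMware's real API.
from dataclasses import dataclass

@dataclass(frozen=True)
class PortGroup:
    name: str
    vlan_tag: int                    # tagging is one setting among several
    allow_promiscuous: bool = False  # extra behavior a VLAN doesn't have
    traffic_shaping: bool = False

# Portgroups are orthogonal to VLANs: several can share the same tag
# while differing in every other respect.
web = PortGroup("web", vlan_tag=10)
mgmt = PortGroup("mgmt", vlan_tag=10, allow_promiscuous=True)
```

Because the concept is new, it makes no promise to behave like a real switch's VLAN configuration - so there is no promise to break.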
So the takeaway is this: if you are going to present an abstraction layer, you must fulfill it completely. Otherwise, you will create confusion. You can do this either by carefully taking stock of all the promises your abstraction makes, or you can change the abstraction - even inventing a new one if you need to. All these tools are in your toolbox when you are designing an interface.
Ever had a system be moody? What do you think was at fault - was it the chosen abstraction? Any other examples of good and bad abstractions? Let me know in the comments.
Further reading: Joel Spolsky's The Law of Leaky Abstractions, an essay on how every abstraction layer doesn't quite fulfill its promises.