My very first computer, and why it matters

It’s never as good as the first time.

Do you remember your first time? Do you think it is important?
Do you think it still influences you somehow?

Well, I do, I do and I do.

A couple of weeks ago, I received my long-awaited Recreated ZX Spectrum,
basically a Bluetooth keyboard shaped like an ’82 8-bit computer.
This is a piece of memorabilia from my youth, as the ZX Spectrum was my
very first computer.
It gave me an opportunity to ponder what it actually taught me.

But first, let’s go through its ~~impressive~~ list of features:
– A whopping 49,152 bytes of RAM
– A 3.5 MHz 8-bit Z80 processor, with an impressive 0.25 IPC
– 16,384 bytes of ROM with an integrated BASIC interpreter
– A 256×192 colored-pixel screen resolution, but color management is tricky
(more on that later)
– A 1500 bps in/out tape interface for persistent storage
– All of that in its signature book-sized plastic case, sporting a timeless
rubber keyboard

Those specs mean very little today! It is actually difficult to believe that
you could accomplish anything with them.
But they had one unsurpassable quality: they were engaging, not intimidating!

I had to learn how to enter instructions, as that was required even to start a
game.
I could even compare my programs to off-the-shelf software, as those were
written by one or two guys in a couple of months!

And, I gained some interesting skills:

  1. How to learn a programming language: BASIC at that time. Since then,
    I have gone through various assembly languages, C, C++, Pascal, Ada, Smalltalk,
    some Lisp and Prolog, Java and more recently F#.
  2. How to be aware of how much memory my programs used.
    Allocating a 100 x 100 int array eats 20,000 bytes, about half the
    available memory. And we are talking 16-bit ints!
  3. How to read and disassemble other people’s programs; I learned a lot
    from those. Soon the debugger was my best friend.
  4. How to use undocumented functions. Heck, nothing was documented.

Were they really beneficial? As a matter of fact,
I should probably talk about bad habits instead:

  1. BASIC, I mean, BASIC of all languages. This meant a lot of GOTOs
  2. I mostly learned nasty tricks to shave off every single byte whenever possible
  3. I hacked other applications to remove protections or copy algorithms
  4. I created fragile code, depending on undocumented features, including
    undocumented opcodes for the processor

When I started my developer career, I was still stuck in those habits. That’s the
problem when you code alone for yourself: no feedback.

On one hand, being a self-learner and self-taught gave me some advantages: I
read a lot, so I had plenty of book knowledge and knew C++ inside out. On the
other hand, it meant I lacked the humility needed to learn from feedback, and I
committed a lot of atrocities in the name of cute code cleverness.

Walk of shame

I still have a vivid memory of what I fear is the worst design I have ever
produced.
The platform was C++ on OS/2. The product we were working on was
getting close to release and, in an attempt to improve supportability,
I designed an error class.
I decided to be clever by providing several overloaded assignment operators
(=) to capture the error code, error message and error category. Let’s see:

MyError error;

error = 45;
error = "Invalid user id";
error = ErrorType::Security;

// which stands for
error.code = 45;
error.message = "Invalid user id";
error.type = ErrorType::Security;

I mean, come on, is there anyone who could see this as a good idea?

I did at the time, but I definitely no longer do!

So what did I actually learn from my first computer?!
I am afraid it will have to wait for the next part of this post.

Stay tuned

A post about agility, architecture and scales

Newton vs Einstein: it gets physical

Newtonian physics ruled the mechanical world for a couple of centuries. It was simple, elegant, almost intuitive.

Relative pressure

Then, in the early twentieth century, an obscure Swiss patent clerk (Einstein, if that rings a bell) demonstrated that it did not work at larger scales and needed adjustments. It was a daring, somewhat far-fetched theory, but it has been proven right every time since.

Then, a couple of decades later, quantum mechanics wreaked havoc at the smaller scale, demonstrating that anything goes! It was counter-intuitive, unfathomable, and it still is: particles are probabilistic, and you have to choose between knowing their location and their speed…

Quantum mechanics rulz

But Newtonian mechanics still rules our day-to-day life, because it is accurate enough to account for our experience. Serious physicists, however, need either relativity or quantum mechanics to dig deeper at non-human scales. And to this day, nobody has been able to reconcile quantum physics and relativity.

What about agility?

Agile values are really people-centric. Indeed, when I was first introduced to them, I thought this was a great way to reconcile users with software developers. But beyond that, agile-inspired methodologies focus on adaptability, whereas traditional project management methods focus on careful planning.

But careful planning is only as good as the information you use to establish your plan: traditional approaches require a lot of up-front information to succeed, a requirement that is difficult to fulfill for software development.

Planning, really?

Instead, Agile focuses on getting the best out of the information you have, and on maximizing information before using it, through the notion of the ‘last responsible moment’.

There is no question that Agile-inspired methodologies have had great success and have helped improve software project success rates.

But none of them scale.

Smaller

They do not scale down: agile methods will not help you design a faster sort algorithm or a Sudoku solver.

I mean, I am not sure how individual interactions would help there.

Design principles, prior work and patterns are the tools you will need there.

Agile methodologies have no relevance at that scale.

Larger

They do not scale up either: being focused on people is great for small groups, but how do you scale that to hundreds, thousands or even larger groups? How do you engage C-level stakeholders? How do you make those organizations embrace change?

That is where you need Enterprise Architecture practices. As is often the case with any tool or methodology, there are many ways you can fail with them. But a key attribute of success is having the right attitude: being a facilitator.

But describing Enterprise Architecture is beyond the scope of this post.

As a conclusion

Trying to change a large organization/IS using some agile methodology is akin to trying to use quantum physics to describe a car. Yes, all matter is made of sub-atomic particles, but you will simply not succeed because the description is too complex.

Understand that Agility and Enterprise Architecture are related in their objectives, but operate at very different scales.

Adopt both, and use accordingly.

Repeat and succeed.

Mechanical sympathy, introduction

Mechanical Sympathy…Jackie Stewart, 1968

This term was coined by Jackie Stewart, a famous British racing driver. He used it to describe his driving philosophy: he spent countless hours with the engineers and mechanics to get a deep understanding of where the limits of the machine lay, allowing him to drive close to the edge, get the most out of the car, and know when to let it rest a bit.

This strategy helped him win races when competitors were pushing their car too hard.

Gravity

The thought process is obviously complex, but it often relies on mental models. Those are abstractions the brain uses to understand, analyze and interact with the physical world.

They are built through the brain’s learning process, which refines them until they are proven accurate enough to act upon.

For example, we all have a clear mental model of gravity: if we drop an object, it falls with increasing speed. This mental model helps us catch something that has slipped out of our hands. If we train hard, we get better at this, until we reach a decent juggling ability.

About reality

Coming back to Jackie Stewart:

every driver (including you and me) has a mental model of a car, which helps him steer it; but he can only spend so much time refining his model, i.e. driving and exploring possible situations. And he relies on a very basic model of the car’s engine, brakes and tire grip. That’s why he loses most of his skills if it rains or if there is a mechanical malfunction.

By spending significant time with technicians, Jackie was able to refine his mental models of the various car components. In the process, he could better anticipate the brakes/engine/steering behavior of his car…

Old technology

In my younger years, I was the proud owner of an Atari ST: I was constantly cracking games’ protections, learning how they were written and coding technical demos of some sort. Those demos required expertise and a deep understanding of the guts of the computer, which mostly meant understanding the MC68000 processor.

Its clock frequency was 8 MHz and instructions took between 4 and 20 or more of those cycles, depending on how many memory accesses were required. A typical cycle count was around 6-8, leading to roughly 1M instructions per second (3 orders of magnitude less than now). So if you wanted a 50 fps demo, your algorithm had to fit in 160,000 clock ticks, i.e. roughly 20,000 instructions.

One time, I had devised a cute scrolling algorithm – remember, no GPU – but I fell short of the expected smoothness: the animation ran at 25 fps, not the 50 fps I was aiming for. That meant I needed more than one screen refresh period to update the image…

Tenacity

Something was fishy there, and I had to understand what and why. Reality was not fitting the theory.

Toying with the parameters of the algorithm (reducing the text size, if my memory serves me well), I was ultimately able to reach the holy grail of smoothness. But my algorithm was now significantly below the 160k-cycle barrier, on paper at least, when simply being under it should have been enough to reach 50 fps.

Epiphany

Then it dawned on me, and I immediately wrote a quick micro-benchmark to test the hypothesis.

Bang!

Actual instruction cycle counts were always a multiple of four, rounded up (of course). My theory was that the shifter used half of the memory access slots for display purposes (actually, this is a bit trickier than I thought).

Atari 520 ST Motherboard

I adjusted for that, which meant rewriting part of my algorithm and lowering my ambitions by reducing the number of pixels moved. And now my scrolling ran smoothly at 50 fps.
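For the curious, here is a back-of-the-envelope version of that arithmetic, written in C# for convenience (purely illustrative; at the time this was 68000 assembly, a datasheet and a pencil, and the cycle figure below is just an example, not an exact datasheet value):

using System;

// Rough frame budget for an 8 MHz machine targeting 50 fps.
const int clockHz = 8_000_000;
const int targetFps = 50;
const int cyclesPerFrame = clockHz / targetFps;    // 160,000 cycles per frame

// Nominal cycle count for some instruction, e.g. 6 cycles on paper...
int nominalCycles = 6;

// ...but on the ST the shifter steals memory access slots, so the effective
// cost ends up rounded up to the next multiple of 4.
int effectiveCycles = (nominalCycles + 3) / 4 * 4; // 6 -> 8, a 33% hidden tax

Console.WriteLine($"{cyclesPerFrame} cycles per frame; " +
                  $"a {nominalCycles}-cycle instruction effectively costs {effectiveCycles}");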

Adaptability

My mental model had to be adjusted to match the reality of the hardware. It was an indispensable step towards my objectives, which were definitely performance-oriented. After this failure, I was able to predictably reach 50 fps when needed. I was basically a better demo coder than before.

Temporary

In this post, I tried to give you a brief introduction to mechanical sympathy and mental models. I also took the opportunity to brag about my past minor successes.

In doing so, I hoped to get you pondering:

  • how good is my mental model of the hardware I am working on?
  • are there any signs that I am wrong?
  • can I find some?
  • and foremost, does it matter?

In the next post, I will dig into the models of various parts of a PC and relate them to actual performance impact.

API design in practice: NFluent

A good API should allow developers to fall into the pit of success.

I use this sentence as a mantra whenever I am engaged in an API design exercise, which basically starts when I write my first tests:

Selecting the next test is an act of design

In this post, I will give you some food for thought on the design of APIs, using NFluent as an example. I will also provide some details on the genesis of the library.

The vision

Thomas and I had the opportunity to attend a great live coding session demonstrating refactoring practices, based on the GildedRose kata and performed by David Gageot. At the end of the BBL, Thomas was pretty amazed by the power of the Fest Assertions library, which was heavily used during the demo. A few weeks later, he committed to building a similar library for the .Net world…

The design

Entry point

Ugly APIs are a menace

A few months afterward, I joined the project. At that stage it was close to a first stable release; Thomas and I engaged in a weeks-long discussion, both online and at the office, about what the entry point of the library should be. This is a topic dear to me and I engaged in it passionately: I had wholeheartedly bought into the vision, and selecting the right entry point was key for me.

A library with a sub-par API has no future. I do not care how much help it can provide: if the API gets in the way of getting results, I might as well write the thing myself.

The form

What should the form of the API be? The mission statement was clear: build a fluent assertion library, allowing checks to be expressed in a form close to natural language. Some fluent assertion libraries, such as FluentAssertions and Should, rely on extension methods applied to the system under test. For example:

...
42.Should().BeEqualTo(42);

myCounter.Should().BeGreaterThan(100);
...

Good, isn’t it? From a fluent perspective, yes. But not that good from a TDD perspective. First of all, the SUT must already exist and have a type before you can even type the assertion code. Supporting and evangelizing TDD is part of NFluent’s mission statement, so this approach seemed counterproductive.

It is also difficult to quickly identify assertions within the test code, whereas being fluent to write and fluent to read is a key part of NFluent’s mission statement.

Finally, there is a more subtle impact: namespace pollution. Every type now has a Should() method, which appears in auto-completion suggestions. This can be quite distracting.
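To see why, here is roughly how such a catch-all Should() entry point can be declared (a simplified illustration, not the actual code of FluentAssertions or Should; the ShouldAssertion name is made up):

public static class ShouldExtensions
{
    // Because T matches any type, Should() pops up in auto-completion
    // on ints, strings, your own domain classes... everything.
    public static ShouldAssertion<T> Should<T>(this T systemUnderTest)
        => new ShouldAssertion<T>(systemUnderTest);
}

public class ShouldAssertion<T>
{
    private readonly T sut;
    public ShouldAssertion(T sut) => this.sut = sut;

    // BeEqualTo(), BeGreaterThan(), etc. would be implemented here,
    // typically comparing against this.sut and throwing on failure.
}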

Definitely, the Should() extension method does not fit the bill. So we kept the static method approach.

The name

There was a long debate about what the proper name for the API should be. We ultimately settled on the now famous Check.That(…):

  • Assert was discarded due to naming conflicts with existing unit testing frameworks
  • Should was discarded for being weak
  • Verify and others were discarded for varying reasons.

Also, Check is test-related in essence while still being assertive.

The design

NFluent has a fluent API, and you build your tests like you would write a sentence.

var heroes = "Batman and Robin";
Check.That(heroes).Not.Contains("Joker").And.StartsWith("Bat").And.Contains("Robin");

Thanks to this ‘fluentness’, you just have to start writing your next check and auto-completion does the rest.

To achieve this ‘fluentness’, NFluent relies heavily on extension methods, which turned out to be the best approach to manage polymorphism and reuse across types.

They also turned out to be a significant pain in the ass, forcing us to use several code generation strategies to keep the workload at an acceptable level.
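To give an idea of the pattern, here is a deliberately simplified sketch of a static entry point combined with type-specific extension methods (illustrative only, not NFluent’s actual source; the real library chains checks through an And link and has a much richer checker infrastructure):

using System;

public static class Check
{
    // The single entry point: a static method that wraps the value to check.
    public static ICheck<T> That<T>(T value) => new FluentCheck<T>(value);
}

public interface ICheck<out T>
{
    T Value { get; }
}

internal class FluentCheck<T> : ICheck<T>
{
    public T Value { get; }
    public FluentCheck(T value) => Value = value;
}

public static class StringCheckExtensions
{
    // This check only shows up in auto-completion for ICheck<string>,
    // so string checks do not pollute ICheck<int> and vice versa.
    public static ICheck<string> StartsWith(this ICheck<string> check, string expected)
    {
        if (!check.Value.StartsWith(expected))
            throw new Exception($"The string does not start with \"{expected}\".");
        return check; // returning the check is what enables chaining
    }
}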

What is important is that very little of this surfaces in the API. There would be no excuse for us if we imposed any unintuitive effort on users, let alone anything worse, such as boilerplate code.

I also put extra effort into the error messages and the overall error message infrastructure, which has undergone several refactoring phases. I wanted to reach an API where building the usual error messages was as easy as possible and the more complex ones remained possible. This is crucial to ensure good consistency among messages.

Conclusion

I hope you are happy users of NFluent. By the way, happy or not, we would be very glad to get any feedback you can provide, as it helps us move forward.

The key messages for your API are:

  1. Put yourself in your users’ shoes. TDD can go a long way here.
  2. Take the time to identify and design your key entry points (classes, static methods) before V 1.0. This is worth it.
  3. Your APIs must drive your design, never let the inner workings of your library surface through your API.

Keep those 3 rules in mind, they will serve you a long time.