Mechanical sympathy, introduction

Mechanical Sympathy…Jackie Stewart, 1968

This term was coined by Jackie Stewart, a famous British racing driver. He used it to describe his driving philosophy: he spent countless hours with the engineers and mechanics to get a deep understanding of where the limits of the machine lay, allowing him to drive close to the edge, get the most out of the car, and know when to let it rest a bit.

This strategy helped him win races when competitors were pushing their cars too hard.

Gravity

The thought process is obviously complex, but it often relies on mental models. Those are abstractions the brain uses to understand, analyze and interact with the physical world.

They are built through the brain’s learning process, which refines them until they prove accurate enough to act upon.

For example, we all have a clear mental model of gravity: if we drop an object, it falls with increasing speed. This mental model helps us catch something that may have slipped out of our hands. If we train hard, we get better at this, until we reach a decent juggling ability.

About reality

Coming back to Jackie Stewart:

every driver (including you and me) has a mental model of a car, which helps them steer it; but a driver can only spend so much time refining that model, i.e. driving and exploring possible situations. Most of us also rely on a very basic model of the engine, the brakes and the grip of the tires. That is why a driver loses most of their skills if it rains or if there is a mechanical malfunction.

By spending significant time with the technicians, Jackie was able to refine his mental models of the various car components. In the process, he could better anticipate the behavior of his car’s brakes, engine and steering…

Old technology

In my younger years, I was the proud owner of an Atari ST: I was constantly cracking games’ protections, learning how they were written and coding technical demos of some sort. Those demos required expertise and a deep understanding of the guts of the computer, which mostly meant understanding the MC68000 processor.

Its clock ran at 8 MHz and instructions took between 2 and 20 or more cycles, depending on how many memory accesses they required. A typical instruction took around 6-8 cycles, which gives roughly 1M instructions per second (three orders of magnitude less than today). So if you wanted a 50 fps demo, your algorithm had to fit in 160,000 clock cycles per frame, i.e. about 20,000 instructions.
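To make the arithmetic explicit, here is a small back-of-the-envelope sketch in C# (the 8-cycle average is just the rough figure quoted above, not a measured value):

// Rough frame budget for a 50 fps demo on an 8 MHz MC68000.
// The 8-cycle average is a ballpark figure, not a measurement.
const double clockHz = 8_000_000;           // MC68000 clock
const double framesPerSecond = 50;          // target refresh rate
const double avgCyclesPerInstruction = 8;   // rough average

double cyclesPerFrame = clockHz / framesPerSecond;                       // 160,000 cycles
double instructionsPerFrame = cyclesPerFrame / avgCyclesPerInstruction;  // ~20,000 instructions
System.Console.WriteLine($"{cyclesPerFrame:N0} cycles -> ~{instructionsPerFrame:N0} instructions per frame");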

One time, I had devised a cute scrolling algorithm (remember, no GPU) but I fell short of reaching the expected smoothness: the animation ran at 25 fps, not the 50 fps I was aiming for, meaning I needed more than one screen refresh period to update the image…

Tenacity

Something was fishy there, and I had to understand what and why. Reality was not fitting the theory.

Toying with the parameters of the algorithm (reducing the text size, if my memory serves me well), I was ultimately able to reach the holy grail of smoothness. But my algorithm was now significantly below the 160k-cycle barrier, on paper at least, so the original version should already have reached 50 fps.

Epiphany

Then it dawned on me, and I immediately wrote a quick micro-benchmark to test the hypothesis.

Bang!

The actual cycle count of each instruction was always rounded up to a multiple of four (of course). My theory was that the Shifter used half of the memory access slots for display purposes (the real story is actually a bit trickier than I thought).
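A quick way to picture the effect (the per-instruction timings below are invented for the illustration, not taken from the 68000 documentation):

// Illustration of the "rounded up to a multiple of four" effect.
// The timings here are made-up examples, not real 68000 figures.
static int RoundUpToFour(int cycles) => (cycles + 3) & ~3;

int[] documentedTimings = { 6, 10, 14, 18 };   // what the theory predicted
int theoretical = 0, actual = 0;
foreach (int cycles in documentedTimings)
{
    theoretical += cycles;
    actual += RoundUpToFour(cycles);           // what the machine really charges
}
System.Console.WriteLine($"theory: {theoretical} cycles, reality: {actual} cycles");
// Spread over a whole frame, this gap is enough to blow the 160k budget.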

I adjusted for that, which meant rewriting part of my algorithm and lowering my ambitions by reducing the number of pixels moved. And now my scrolling ran smoothly at 50 fps.

Adaptability

My mental model had to be adjusted to match the reality of the hardware. It was an indispensable step to reach my objectives, which were definitely performance oriented. But after this failure, I was able to predictably reach 50 fps when needed. I was basically a better demo coder than before.

Temporary

In this post, I tried to give you a brief introduction to mechanical sympathy and mental models. I also took the opportunity to brag about my past minor successes.

Doing this, I hoped to get you pondering:

  • how good is my mental model of the hardware I am working on?
  • are there any signs that it is wrong?
  • can I find some?
  • and foremost, does it matter?

In the next post, I will dig into the models of various parts of a PC and relate them to actual performance impact.

API design in practice: NFluent

A good API should allow developers to fall into the pit of success.

I use this sentence as a mantra whenever I am engaged in an API design exercise, which basically starts when I write my first tests:

Selecting the next test is an act of design

In this post, I will give you some food for thought on the design of APIs, using NFluent as an example. I will also provide some details on the genesis of the library.

The vision

Thomas and I had the opportunity to attend a great live coding session demonstrating refactoring practices, based on the GildedRose kata and performed by David Gageot. At the end of the BBL, Thomas was pretty amazed by the power of the Fest Assertions library, which was heavily used during the demo. A few weeks later, he committed to building a similar library for the .NET world…

The design

Entry point

A few months afterward, I joined the project. At that stage it was close to a first stable release; Thomas and I engaged in a weeks-long discussion, both online and at the office, about what the entry point of the library should be. This is a topic dear to me and I engaged in it passionately: I had wholeheartedly embraced the vision, and selecting the right entry point was key for me.

A library with a sub-par API has no future. I do not care how much help it can provide; if the API gets in the way of getting results, I might as well write the thing myself.

The form

What should be the form of the API? The mission statement was clear: build a fluent assertion library, allowing checks to be expressed in a form close to natural language. Some fluent assertion libraries, such as FluentAssertions and Should, rely on extension methods applied to the system under test. For example:

...
42.Should().BeEqualTo(42);

myCounter.Should().BeGreaterThan(100);
...

Good, isn’t it? From a fluent perspective, yes. But not that good from a TDD perspective. First of all, the SUT must already exist and have a type before you can even type the assertion code. Supporting and evangelizing TDD is part of NFluent’s mission statement, so this approach seemed counterproductive.

It is also difficult to quickly identify assertions within the test code, whereas being fluent to write and fluent to read is a key part of NFluent’s mission statement.

Finally, there is also a more subtle impact, namespace pollution: every type now has a Should() method, which appears in auto-completion suggestions. This can be quite distracting.
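To make the point concrete, here is a hypothetical, minimal version of such an extension (illustrative only, not the actual code of FluentAssertions or Should): because it targets object, Should() shows up in the completion list of every expression.

using System;

// Hypothetical minimal Should() extension, for illustration only.
public static class ShouldExtensions
{
    // Targeting object means *every* type now exposes Should() in auto-completion:
    // strings, ints, your domain classes, everything.
    public static ObjectAssertion Should(this object sut) => new ObjectAssertion(sut);
}

public class ObjectAssertion
{
    private readonly object _sut;
    public ObjectAssertion(object sut) => _sut = sut;

    public ObjectAssertion BeEqualTo(object expected)
    {
        if (!Equals(_sut, expected))
            throw new Exception($"Expected {expected} but was {_sut}.");
        return this;
    }
}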

Clearly, the Should() extension method does not fit the bill. So we kept the static method approach.

The name

There was a long debate on what should be the proper name for the API. We ultimately settled on the now famous Check.That(…):

  • Assert was discarded due to naming conflicts with existing unit testing frameworks
  • Should was discarded for being weak
  • Verify and others were discarded for varying reasons.

Also, Check is test related in essence while still being assertive.

The design

NFluent has a fluent API, and you build your tests like you would write a sentence.

var heroes = "Batman and Robin";
Check.That(heroes).Not.Contains("Joker").And.StartsWith("Bat").And.Contains("Robin");

Thanks to this ‘fluentness’, you just have to start writing your next check and auto-completion does the rest.

To achieve this ‘fluentness’, NFluent relies heavily on extension methods, which turned out to be the best approach to manage polymorphism and reuse across types.
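As an illustration of the underlying pattern (a deliberately simplified sketch, not NFluent’s actual types), the static entry point returns a small checker object and per-type extension methods hang the relevant checks onto it:

using System;

// Simplified sketch of the pattern: static entry point + per-type extension methods.
// Illustrative types only, not NFluent's real implementation.
public static class Check
{
    public static Checker<T> That<T>(T value) => new Checker<T>(value);
}

public class Checker<T>
{
    public T Value { get; }
    public Checker(T value) => Value = value;

    // 'And' returns the same checker so checks can be chained fluently.
    public Checker<T> And => this;
}

// String-specific checks live in string-specific extensions, so auto-completion
// offers Contains/StartsWith on Check.That("...") but not on Check.That(42).
public static class StringCheckExtensions
{
    public static Checker<string> Contains(this Checker<string> checker, string expected)
    {
        if (!checker.Value.Contains(expected))
            throw new Exception($"The string \"{checker.Value}\" does not contain \"{expected}\".");
        return checker;
    }

    public static Checker<string> StartsWith(this Checker<string> checker, string expected)
    {
        if (!checker.Value.StartsWith(expected))
            throw new Exception($"The string \"{checker.Value}\" does not start with \"{expected}\".");
        return checker;
    }
}

Each supported type gets its own set of extensions, which is where the code generation mentioned below comes into play.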

It also turned out to be a significant pain in the ass, forcing us to use several code generation strategies to keep the workload at an acceptable level.

What is important is that very little of this surfaces in the API. There would be no excuse for us if we imposed any unintuitive effort on users, let alone anything worse, such as boilerplate code.

I also put extra effort into the error messages and the overall error message infrastructure, which has undergone several refactoring phases. I wanted to reach an API where building the usual error messages is as easy as possible while the more complex ones remain possible. This is crucial to ensure good consistency among messages.

Conclusion

I hope you are happy users of NFluent. By the way, happy or not, we would be very glad to get any feedback you can provide, as it helps us move forward.

The key messages for your API are:

  1. Put yourself in your users’ shoes. TDD can go a long way here.
  2. Take the time to identify and design your key entry points (classes, static methods) before V 1.0. This is worth it.
  3. Your API must drive your design; never let the inner workings of your library surface through your API.

Keep those three rules in mind; they will serve you for a long time.

Workload management strategies

In a reactive world, everything is an event that anyone can grab and process according to their own responsibilities. The notion of an event allows for very low contract coupling; it acts as a medium between classes. A message is similar, but with more coupling, as it carries endpoints or endpoint addresses.
This model offers a flexible design where events flow through the system and are processed along the way. But what happens when you have too many events to process?

Continue reading “Workload management strategies”

Our Devoxx 2014 talk

My mate Thomas Pierrain and I were lucky enough to have our topic selected for Devoxx FR 2014. The talk presented the sequencer and walked through an iterative design exercise for a real-time financial pricing service.

Many thanks to our amazing audience for the interesting questions and the good feedback. For those who may be interested, the talk can be seen on Parleys, in the Devoxx FR channel.