A post about agility, architecture and scales

Newton vs Einstein: it gets physical

Newtonian physics ruled the mechanical world for a couple of centuries. It was simple, elegant, almost intuitive.

Relative pressure

Then, in the early twentieth century, an obscure Swiss patent clerk (Einstein, if that rings a bell) demonstrated that it did not work at larger scales and needed adjustments. It was a daring, somewhat far-fetched theory, but it has been proven right every time since.

Then, a couple of decades later, quantum mechanics wreaked havoc at the smaller scale, demonstrating that anything goes! It was counterintuitive, unfathomable, and it still is: particles are probabilistic, and you have to choose between knowing their location and knowing their speed…

Quantum mechanics rulz

But Newtonian mechanics still rules our day-to-day life, because it is accurate enough to account for our experiences. Serious physicists, however, need either relativity or quantum mechanics to dig deeper at non-human scales. And to this day, nobody has been able to reconcile quantum physics and relativity.

What about agility?

Agile values are really people-centric. Indeed, when I was first introduced to them, I thought this was a great way to reconcile users with software developers. But beyond that, agile-inspired methodologies focus on adaptability, whereas traditional project management methods focus on careful planning.

But careful planning is only as good as the information you use to establish your plan: traditional approaches require a lot of up-front information to succeed, a requirement that is difficult to fulfill for software development.

Planning, really?

Instead, Agile focuses on getting the best out of the information you have, and on maximizing information before using it through the notion of the ‘last responsible moment’.

There is no question that agile-inspired methodologies have had great success and have helped improve software project success rates.

But none of them scale.

Smaller

They do not scale down: agile methods will not help you design a faster sort algorithm or a Sudoku solver.

I mean, I am not sure how individual interactions would help there.

There, design principles, previous work and patterns are the tools you will need.

Agile methodologies have no relevance there.

Larger

They do not scale up either: being focused on people is great for small groups, but how do you scale that to hundreds, thousands or even larger groups? How do you engage C-level stakeholders? How do you make those organizations embrace change?

That is where you need Enterprise Architecture practices. As is often the case with any tool or methodology, there are many ways you can fail with them. But a key attribute of success is having the right attitude: being a facilitator.

But describing Enterprise Architecture is beyond the scope of this post.

As a conclusion

Trying to change a large organization/IS using some agile methodology is akin to trying to use quantum physics to describe a car. Yes, all matter is made of sub-atomic particles, but you will simply not succeed, because it is too complex.

Understand that Agility and Enterprise Architecture are related in their objectives, but operate at very different scales.

Adopt both, and use accordingly.

Repeat and succeed.

Mechanical sympathy, introduction

Mechanical Sympathy…Jackie Stewart, 1968

This term was coined by Jackie Stewart, a famous British race driver. He used it to describe his driving philosophy: he spent countless hours with the engineers and mechanics to get a deep understanding of where the limits of the machine lay, allowing him to get close to the edge, get the most out of the car, and know when to let it rest a bit.

This strategy helped him win races when competitors were pushing their cars too hard.

Gravity

The thought process is obviously complex, but it often relies on mental models. Those are abstractions the brain uses to understand, analyze and interact with the physical world.

They are built through the brain’s learning process, which refines them until they prove accurate enough to act upon.

For example, we all have a clear mental model of gravity: if we drop an object, it falls with accelerating speed. So this mental model helps us catch something that may have slipped out of our hands. If we train hard, we get better at this, until we have a decent juggling ability.

About reality

Coming back to Jackie Stewart:

every driver (including you and me) has a mental model of a car, which helps him steer it; but he can only spend so much time refining his model, i.e. driving and exploring possible situations. And he relies on a very basic model of the car’s engine, brakes and the stickiness of its tires. That’s why he loses most of his skills if it rains or if there is a mechanical malfunction.

By spending significant time with technicians, Jackie was able to refine his mental models of the various car components. In the process, he could better anticipate the behavior of his car’s brakes, engine and steering…

Old technology

In my younger years, I was the proud owner of an Atari ST: I was constantly cracking games’ protections, learning how they were written and coding technical demos of some sort. Those demos required expertise and a deep understanding of the guts of the computer, which mostly meant understanding the MC68000 processor.

Its frequency was 8 MHz and instructions took between 2 and 20 or more of those cycles, depending on how many memory accesses were required. A typical cycle count was around 6-8, leading to roughly 1M instructions per second (three orders of magnitude less than today). So if you wanted a 50 fps demo, your algorithm had to fit in 160,000 clock ticks, i.e. roughly 20,000 instructions.
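As a quick back-of-the-envelope check of that budget, here is a small C# sketch of the arithmetic only; the 8-cycle figure is simply the midpoint of the typical range quoted above:

using System;

const int ClockHz = 8_000_000;        // 8 MHz MC68000 clock
const int TargetFps = 50;             // one update per screen refresh
const int TypicalCyclesPerInstr = 8;  // rough average of the 6-8 cycles mentioned above

int cyclesPerFrame = ClockHz / TargetFps;                   // 160,000 cycles per frame
int instrPerFrame = cyclesPerFrame / TypicalCyclesPerInstr; // ~20,000 instructions per frame

Console.WriteLine($"{cyclesPerFrame:N0} cycles -> ~{instrPerFrame:N0} instructions per frame");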

One time, I had devised a cute scrolling algorithm – remember, no GPU – but I fell short of reaching the expected smoothness: the animation ran at 25 fps, not the 50 fps I was aiming for. That meant I needed more than one screen refresh period to update the image…

Tenacity

Something was fishy there, and I had to understand what and why. Reality was not fitting the theory.

Toying with the parameters of the algorithm (reducing the text size, if my memory serves me well), I was ultimately able to reach the holy grail of smoothness. But on paper my algorithm was now significantly below the 160k cycle barrier, whereas merely fitting under it should have been enough to reach 50 fps.

Epiphany

Then it dawned on me, and I immediately wrote a quick micro benchmark to assess the hypothesis.
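On the ST, that meant timing instruction sequences against the screen refresh. As a rough modern analogue (illustrative only, not the original 68000 code), the shape of such a micro benchmark is simply “repeat the operation many times, then divide”:

using System;
using System.Diagnostics;

const int Iterations = 10_000_000;
long sink = 0; // consumed in the final print so the loop is not optimized away

var sw = Stopwatch.StartNew();
for (int i = 0; i < Iterations; i++)
    sink += i * 3; // stand-in for the instruction sequence under test
sw.Stop();

double nsPerOp = sw.Elapsed.TotalMilliseconds * 1_000_000 / Iterations;
Console.WriteLine($"~{nsPerOp:F2} ns per operation (sink={sink})");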

Bang!

The actual cycle count of each instruction was always a multiple of four, rounded up (of course). My theory was that the shifter used half of the memory access slots for display purposes (actually, this is a bit trickier than I thought).

Atari 520 ST Motherboard
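To make the correction concrete, here is a tiny sketch of the adjusted timing model (the nominal cycle counts below are illustrative, not actual 68000 timings):

using System;

// Observed on the ST: the effective cost of an instruction is its nominal
// cycle count rounded up to the next multiple of 4.
static int EffectiveCycles(int nominalCycles) => (nominalCycles + 3) & ~3;

foreach (var nominal in new[] { 6, 8, 10, 14, 18 })
    Console.WriteLine($"{nominal} nominal cycles -> {EffectiveCycles(nominal)} effective cycles");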

I adjusted for that, which meant rewriting part of my algorithm and lowering my ambitions by reducing the number of pixels moved. And now my scrolling was running smoothly at 50 fps.

Adaptability

My mental model had to be adjusted to match the reality of the hardware. It was an indispensable step to reach my objectives, which were definitely performance oriented. But after this failure, I was able to predictably reach 50 fps when needed. I had basically become a better demo coder than I was before.

Temporary

In this post, I tried to give you a brief introduction to mechanical sympathy and mental models. I also took the opportunity to brag about my past minor successes.

In doing so, I hope to get you pondering:

  • how good is my mental model of the hardware I am working on?
  • are there any signs that I am wrong?
  • can I find some?
  • and foremost, does it matter?

In the next post, I will dig into the models of various parts of a PC and relate them to actual performance impact.

Our Devoxx 2014 talk

My mate Thomas Pierrain and I were lucky enough to have our topic selected for Devoxx FR 2014. The subject was the presentation of the sequencer and an iterative design exercise for a financial real-time pricing service.

Many thanks to our amazing audience, who asked interesting questions and gave good feedback. For those who may be interested, the talk can be seen on Parleys in the Devoxx FR channel.

The Sequencer (part 2.1)

Update

I have received two more proposals, which I will comment on soon.

Original

I closed the last episode with a little exercise for my readers: suggest how to complete my requirements, namely by ensuring guaranteed ordering for the sequencer.

Alas, only two readers were skilled or brave enough to face the challenge and attempt an answer, and I thank them for that.

The two proposals were similar, but freeman did provide a gist, so let’s discuss it.


using System;
using System.Collections.Generic;
using System.Threading;

namespace Sequencer
{
    using System.Collections.Concurrent;

    public class Sequencer
    {
        private readonly ConcurrentQueue<Action> _pendingTasks = new ConcurrentQueue<Action>();
        private readonly object _running = new Object();

        public void Dispatch(Action action)
        {
            // Queue the task
            _pendingTasks.Enqueue(action);
            // Schedule a processing run (may be a noop)
            ThreadPool.QueueUserWorkItem(x => Run());
        }

        // run when the pool has available cpu time for us.
        private void Run()
        {
            if (Monitor.TryEnter(_running))
            {
                try
                {
                    Action taskToRun;
                    while (_pendingTasks.TryDequeue(out taskToRun))
                    {
                        taskToRun();
                    }
                }
                finally
                {
                    Monitor.Exit(_running);
                }
            }
        }
    }
}

This will definitely capture and secure the order of execution. And I like the smart use of TryEnter, which allows us to get rid of the boolean that stored the running state.

But, and this is a big but, this solution violates another (implicit) requirement. I have to apologize for this one, as I failed to state it earlier.

But you know customers: they understand what they need when the development is over :-).

That requirement is fairness: fairness between Sequencer instances, as well as fairness between Sequencers and other tasks. Fairness is to be understood as the guarantee that all submitted tasks will eventually be executed, that they have equivalent access to execution units (i.e. cores), and that they run with similar delays.
It means that no system can gain exclusive access to execution units, and that tasks are executed roughly in the order they are submitted.

This being defined, this solution is not fair. Can you tell me why?

Note: here is a gist for the first proposal

Message based concurrency

Message based concurrency is a share-nothing approach: no business object is shared between the various collaborating systems. Note that ‘systems’ here covers both applications/services and threads; they are usually referred to as ‘agents’, ‘actors’ or ‘nodes’, depending on the messaging strategy. I will refer to them as ‘agents’ for the rest of this entry.

Instead of sharing entities, agents send messages detailing part or all of the current state of one or more entities. For example, whenever a new article is added to a basket by an agent, it will send the content of the basket to the agent dedicated to computing the total value of the basket, which in turn will communicate back the updated total.
This approach is also known as ‘event-driven’ architecture.
Note that this approach fits nicely with inversion of control, and often includes facilities allowing any agent to receive notifications for any entity it is interested in.
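To make the basket example concrete, here is a minimal sketch of two agents exchanging messages over in-process queues. The BasketChanged/TotalComputed message types and the BlockingCollection-based mailboxes are my own illustration, not a specific framework:

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

// Each agent owns a mailbox; collaboration happens only through messages.
var pricerInbox = new BlockingCollection<BasketChanged>();
var basketInbox = new BlockingCollection<TotalComputed>();

// Pricing agent: consumes basket snapshots and replies with the computed total.
var pricer = Task.Run(() =>
{
    foreach (var msg in pricerInbox.GetConsumingEnumerable())
        basketInbox.Add(new TotalComputed(msg.Prices.Sum()));
});

// Basket agent: an article was added, so it publishes the basket's new content...
pricerInbox.Add(new BasketChanged(new[] { 10m, 25m, 7.5m }));
pricerInbox.CompleteAdding();

// ...and later receives the updated total as a message, never touching the pricer's state.
Console.WriteLine($"New total: {basketInbox.Take().Total}");
pricer.Wait();

// Messages carry a copy of the state; no business object is shared between agents.
record BasketChanged(decimal[] Prices);
record TotalComputed(decimal Total);

A real system would typically rely on an actor library or a message bus rather than raw queues, but the shape is the same: state travels as messages, never as shared references.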

Pros

  • Share nothing means no race condition (at the entity/data level)
  • Message passing helps keep coupling low
  • It eases horizontal scalability (messages can be in-process or cross-process)

Cons

  • Synchronization is hard, but less needed
  • Provides eventual consistency (due to little to no synch)
  • Harder to debug

Overall, message based concurrency is a very good model. The learning curve is a bit steep, but the resulting systems are stable and scalable. Unless hard consistency is a strong requirement, you should consider using message based concurrency before anything else.

It has been my personal favorite for quite some time now, and I am still waiting for a use case where it would not be the best approach.

The hard parts are:
– living in an eventually consistent world
– managing asynchronous operations
– identifying the adequate libraries