Sunday, December 29, 2019

leftovers - ask the business bro (the goal, riffoffs)

Good morning,

As I promised TOA at the end of the ‘Ask The Business Bro’ series about The Goal, here are a few leftover rules of thumb about monitoring the performance of an organization.

Temporary reduction of released material reveals obvious choke points – those whose inventories remain the longest.

We’ll start with the topic from last time’s discussion – how to identify a flow problem. We discussed the theory behind reducing the release of raw material using the river analogy, but this thought gets more directly at how the method works in practice. If a production process has five distinct steps, broadly speaking, the choke point is whichever step takes the longest to complete one unit of work.
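To make the idea concrete, here is a minimal sketch (my own illustration – the step names and times are made up, not anything from the book) that flags the slowest step as the likely choke point:

```python
# Hypothetical per-unit processing times for a five-step process (minutes).
step_times = {
    "cutting": 4,
    "welding": 7,
    "painting": 5,
    "assembly": 11,   # slowest step
    "packing": 3,
}

# The likely choke point is the step that takes the longest per unit of work -
# the same step where inventory lingers when we cut back on released material.
choke_point = max(step_times, key=step_times.get)
print(choke_point, step_times[choke_point])   # assembly 11
```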

Longer lead times automatically increase inventory.

This brings a different perspective to the idea that balanced flow comes from a combination of inventory and excess production capacity. A long lead time implies that the organization needs more time to complete a given unit of production, which means everything in the production process remains in progress longer than it would if the lead time were shorter. The reliable way an organization can reduce lead time is by increasing its excess production capacity.
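One rough way to see the arithmetic behind this (my own illustration using Little’s Law, which isn’t cited in the book, and made-up numbers): at a steady completion rate, everything released sits in the system for the full lead time, so work in process rises in step with lead time.

```python
# Little's Law: work in process = throughput x lead time (hypothetical numbers).
throughput = 100            # units completed per day
lead_time_long = 10         # days from release to completion
lead_time_short = 4

wip_long = throughput * lead_time_long     # 1,000 units sitting in process
wip_short = throughput * lead_time_short   # 400 units sitting in process
print(wip_long, wip_short)
```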

Instability results from three basic effects. If the product life is short, overproduction might lead to obsolete inventory while demand is left unsatisfied for a proportionally longer period of the product’s life. If demand is volatile, inventory stock will move in sync with the threat of shortages. If production load fluctuates, due date performance is the first to suffer.

A few weeks ago, we talked about negative fluctuations and how they tend to accumulate over time rather than even out with positive fluctuations. In general, holding an appropriate level of inventory is a good solution. However, if the organization operates within the unstable conditions defined in this thought, it is advisable to opt for greater production capacity ahead of holding inventory.

Setup, process, queue, and wait are the components of how long a part remains in the system. Bottleneck parts spend most of their time in queue (often waiting for another bottleneck part to process) while non-bottleneck parts spend most of their time in wait (often for the bottleneck to finish processing). Bottlenecks dictate inventory levels through this mechanism.

This thought gets into the basic math involved in calculating the system’s performance and highlights the importance of subordinating every other measurement to the bottleneck’s capabilities. If the organization is capable of processing one hundred units a day through the bottleneck, the rest of the process must work in tandem to have one hundred units ready for the bottleneck each day.
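A tiny sketch of what that subordination means in practice (the capacities are hypothetical numbers of my own, not figures from the book): release only what the bottleneck can absorb, even when the upstream steps could do more.

```python
# Hypothetical daily capacities.
bottleneck_capacity = 100   # units/day the bottleneck can process
upstream_capacity = 160     # units/day the upstream steps could produce

# Subordinate the release schedule to the bottleneck's pace.
daily_release = min(upstream_capacity, bottleneck_capacity)
print(daily_release)        # 100 - anything more just becomes inventory
```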

Saving time on setup at a non-bottleneck is an illusory way to save costs. By definition, non-bottlenecks have excess capacity.

This comment extends the above (and also references the most recent post). Let’s suppose we have a simple operation with two teams, preparation and production. The preparation team readies units for the bottleneck and the production team processes those units through it. We can put one hundred units a day through the bottleneck, so the preparation team considers one hundred units per day a normal workload.

One day, the team discovers that, through a series of efficiencies, they are able to prepare two hundred units a day. Total cost rises 50%, but because output has doubled, the cost per prepared unit falls 25%.

Good, right?

No.

Imagine walking into a pizza restaurant and seeing piles of uncooked pies, stacked all the way to the ceiling – that’s what will happen here. It doesn’t matter that each pie was cheaper to produce because the extra parts simply create added inventory cost. It’s always important to find ways to work smarter, but optimize too much at the local level and the organization is harmed because it cannot turn the increased cost of holding inventory into added revenue.
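Spelling out the arithmetic with an assumed base cost (the $1,000 figure is mine; the 50% and 25% figures come from the example above):

```python
# Assumed cost of preparing 100 units a day.
base_cost, base_output = 1000.0, 100

# After the "efficiency": total cost rises 50%, output doubles.
new_cost, new_output = base_cost * 1.5, 200

per_unit_before = base_cost / base_output   # 10.00
per_unit_after = new_cost / new_output      # 7.50 - a 25% saving per unit

# But the bottleneck still processes only 100 units a day.
excess_per_day = new_output - 100           # 100 extra uncooked pies, every day
print(per_unit_before, per_unit_after, excess_per_day)
```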

Parts should only be prioritized if there is a shortage downstream to account for. A buffer system helps preemptively identify shortages.

A buffer system is a complicated way of saying that the next batch of work should always arrive at a workstation before the current batch is complete. This way, there is no lost time in the transition from one unit of work to the next.
 
Using a buffer system means many items will finish prior to the due date. If the release date is tracked, a priority system emerges where the oldest parts are worked on first.

The danger of the buffer setup is prioritization. If a team has multiple options regarding what to work on next, certain items can be passed over again and again in each cycle of prioritization. A good default is to work on the oldest units of work unless an urgent requirement downstream calls for prioritizing newer units.
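Here is a minimal sketch of that default rule (the part names, release dates, and urgent flag are hypothetical): urgent downstream needs jump the line, and everything else is worked oldest-first by release date.

```python
from datetime import date

# Hypothetical work queue with tracked release dates.
queue = [
    {"part": "A", "released": date(2019, 12, 20), "urgent": False},
    {"part": "B", "released": date(2019, 12, 27), "urgent": False},
    {"part": "C", "released": date(2019, 12, 23), "urgent": True},
]

# Urgent items first, then oldest release date first.
next_up = sorted(queue, key=lambda p: (not p["urgent"], p["released"]))
print([p["part"] for p in next_up])   # ['C', 'A', 'B']
```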

A good starting point for a flow problem is to use half the current lead time as the buffer. This is not likely the optimal move. However, once the change is enacted, the follow-up effort will iron out any details missed in the initial step.

This thought gets at the ethos of The Goal. The ideas from this book are far from perfect and following Goldratt’s words to the letter is hardly the formula for success. However, the framework he creates will work for anyone committed to the process of ongoing improvement. An organization trained to attack the follow-up effort and iron out the wrinkles in the initial plan will surely find itself positioned for success in the long term.

Just swerved into a passing truck, big business overtaking, without indicating, he passes on the right, been driving through the night, to bring us the best price…

And this thought from TOA favorite Courtney Barnett’s ‘Dead Fox’ brings home a different point. There is always the temptation to solve problems by putting in the extraordinary effort – with perhaps a broken rule or two along the way – but this always introduces needless risk to the process. Sure, we can pass on the right, but a red light ahead forces everyone to stop again. Was that worth the added risk of a crash?

Even if we take all possible measures in the name of lowering costs, we must ask – since risk is always compensated, why would a process that increases risk be a reliable way to lower price?

Beats me, reader.

Thanks for your time.

Signed,

The Business Bro