Stories at Work

After reading a few heavy technology books since the beginning of 2021, I was looking for a relatively light read, and that’s when my senior leader recommended “Stories at Work” by Indranil Chakraborty. With a recommendation from such an accomplished orator and fantastic storyteller, I bought the book immediately to learn the techniques it offers. Being an engineer who takes pride in an analytical approach to solving problems, I considered myself good at articulating facts and data points and, by a false dichotomy, assumed that I could not be a good storyteller. Indranil broke this myth with the following definition of stories in business and set the tone for some awesome insights!

A story is a fact wrapped in context and delivered with emotion

I usually start any new learning by understanding “Why” it is required. In this case, I remembered Yuval Noah Harari’s Sapiens, which refers to humans’ ability to tell stories as a key outcome of the Cognitive Revolution that led to the advancement of our societies. In addition, Indranil Chakraborty provides six compelling reasons why stories are profoundly relevant:

  1. Evolutionary predisposition: Biologists confirm that the human brain is predisposed to think in story terms and explain things in story structures. Our brain converts raw experience into story form and then considers, ponders, remembers and acts on that self-created story, not the actual input experience. So, the next time someone nods vigorously to indicate understanding of your speech but then says something completely different when asked to paraphrase, blame the human brain at work!
  2. Childhood story exposure: We have been telling children stories to teach them values and behavior and to build their knowledge. This exposure to stories through the key years of development results in adults who are irrevocably hardwired to think in story terms.
  3. Chemical post-it notes: Daniel Goleman’s “Emotional Intelligence” covers this topic in detail – in essence, emotionally intense events are permanently registered in our emotional brain (amygdala) by neuro-chemicals for super-fast retrieval whenever similar events take place in future.
  4. Neural coupling: When a story is told and it has meaning, the brain patterns of the speaker and the listener sync. This can be used to ensure listeners fully comprehend what one wants to convey.
  5. Monkey see, monkey feel: When someone describes pain they went through, we feel the same way. This is due to mirror neurons that are fired up in the minds of both the listener and the storyteller.
  6. Data brain, story brain: Stories impact more areas of our brain than data does and hence stories involve us much more. This increases the likelihood of us taking action when we hear stories and not just data.

To summarize why stories are a powerful way to communicate our message – In this world of increasing noise and clutter, an ability to find an expressway to the listeners’ minds can be the most powerful skill in a leader’s repertoire. Stories can be this expressway if laid out appropriately!

Now that we understand “Why” stories are important in business, the book explores “What” are the four situations where we can start our storytelling journey:

  1. Using stories to build rapport and credibility: When we meet a person or a group for the first time, we start with an introduction that is usually filled with credentials that we expect will build trust. While credentials are an indirect pointer to our character, we can be more effective in building trust by sharing anecdotes from our life. Indranil calls them “connection stories” that can help our listeners appreciate our values and result in forming a bond through shared values and beliefs. He also provides a step-by-step process to create and fine-tune connection stories:
    • Before meeting a new group of people, write down five words or phrases about your character, values or beliefs that you would like your listener to infer about you.
    • Recollect and jot down an incident from your life where you have displayed many of these character traits.
    • Narrate the incident to someone you trust and write down what they inferred about you from it.
    • Based on the feedback, chisel down the story to just about a minute or less.
    • Tell the story to two other people; this will automatically help you refine it further.
    • Retell the story to yourself, starting with the character trait you want your audience to take away.
    • Finally, record your story, transcribe it and fine-tune it further by brutally eliminating unnecessary words.
  2. Using stories to influence and overcome objections: When people hold a strong belief in an illogical idea, it is usually based on some personal experience or story. Using data or logic to debate against such a belief will be futile. The only way to convince people to change in such a scenario is to replace their story with a more powerful one, called an “influence story”. An influence story has to be introduced carefully, using the following steps:
    1. Acknowledge the anti-story: Empathize with the listener’s story and express understanding of reasons behind prevailing belief.
    2. Share the story of the opposite point of view: This is the step where the listener’s story is replaced with a more powerful one; it is ideal if the new story can be corroborated.
    3. Make the case: Without offending the listener’s views, explain the need for a change.
    4. Make the point: Finally call for action, maybe to experiment with the change first and see the results for oneself.
  3. Getting strategies to stick: Organizations come up with well-thought-out vision and value statements, but many times they don’t stick across the organization due to three key reasons – abstraction of language, absence of context or the curse of knowledge. To address these challenges, “clarity stories” using simple English with the following structure are used:
    1. In the past: Articulate how we succeeded in the past using strategy relevant at that time.
    2. Then something happened: Highlight the changes caused by both external and internal factors that have rendered the past strategy irrelevant.
    3. So now: Introduce the new strategy that needs to be adopted to succeed in current reality.
    4. In the future: Explain your vision on how this new strategy will create new opportunities and success in future.
  4. Using stories to share best practices, knowledge or success: Just like we tend to focus on credentials while introducing ourselves, the focus while articulating success or best practices tends to be data points or statistics. Given humans are predisposed to assimilate stories better, the suggestion here is to turn them into “success stories”. Narrate the success as stories placing human characters appropriately for effective reach and impact.

After covering “Why” and “What”, the book goes on to cover “How” to put them together for different business scenarios. I would recommend reading the book for this section (and the previous ones as well for comprehensive understanding).

To summarize, the combination of four story patterns – connection stories, influence stories, clarity stories and success stories – will make external and internal communication more effective and transform an organization. Happy story-telling!

SRE – Management

We covered the motivation behind SRE in the first blogpost of this series, followed by Principles and Practices. Let’s complete the foundation with Google’s guidance on how to get SREs working together in a team and working as teams. To ensure the SRE approach sticks without the team slipping back to old ways, the new ways of working covered in this blogpost should be incorporated in a structured manner, with the team and the management committing to adhere to them at all costs.

Accelerating SREs to On-Call and Beyond: Educating new SREs on concepts and practices up front will shape them into better engineers and make their skills more robust.

  • Initial Learning Experiences – The Case for Structure Over Chaos: SREs must handle a mix of proactive (engineering) and reactive (on-call) work, while traditional Operations teams are predominantly reactive. To position the team for success with proactive work, a structured knowledge build-up of the system is essential. Some techniques for getting there:
    • Learning Paths That Are Cumulative and Orderly – Show the new SRE team an orderly path that will instill confidence that there is a plan for mastering the system through a combination of education, exposure and experience.
    • Targeted Project Work, Not Menial Work – Make the initial weeks effective by giving the engineers project work that can reinforce their learning.
  • Creating Stellar Reverse Engineers and Improvisational Thinkers: SREs will continue to encounter systems with design patterns they have not seen before. They need strong reverse engineering skills, along with the ability to think statistically and improvise, to untangle such systems without getting stuck.
  • Best Practices for Aspiring On-Callers: For engineers who typically prefer creating new tech solutions, being on-call to troubleshoot production issues can be made interesting with the following practices:
    1. A Hunger for Failure: Reading and Sharing Postmortems
    2. Disaster Role Playing (regular team exercises for new joiners to enact responding to pages)
    3. Break Real Things, Fix Real Things (by simulating volumes or issues in non-critical lower environments)
    4. Documentation as Apprenticeship (by overhauling outdated knowledge base)
    5. Shadow On-Call Early and Often
  • On-Call and Beyond – Rites of Passage and Practicing Continuing Education: Once an engineer has demonstrated the ability to handle issues independently, it is time to formally add them to the on-call rota and celebrate this milestone as a team. It is also important to set up a regular learning series that helps the entire team stay in touch with changes.

Dealing with interrupts: Once the SRE team is in charge of handling operations, “Managing Operational Load” is the next topic to focus on. Operational load is the work that must be done to maintain the system in a functional state, and it will interrupt the SRE team’s planned project work. So, the objective is to handle such interruptions without distracting the engineers from their cognitive flow state. Interrupts fall into three general categories:

  • Pages concern production alerts and are triggered in response to production emergencies. They are commonly handled by a primary on-call engineer, who is focused solely on on-call work. A person should never be expected to be on-call and also make progress on projects or anything else with a high context switching cost. A secondary on-call engineer provides back-up in case of contingencies.
  • Tickets concern customer requests that require the team to take an action. The primary or secondary on-call engineer can work on tickets when there are no pages to handle. Depending on the nature and priority of tickets, a dedicated person might also be assigned to work on tickets.
  • Ongoing operational responsibilities include activities like team-owned code or flag rollouts, or responses to ad-hoc, time-sensitive questions from customers. An approach similar to handling tickets can be adopted.

Embedding an SRE to Recover from Operational Overload: A burdensome amount of ops work over a prolonged period is dangerous because the SRE team might burn out or be unable to make progress on project work. One way to relieve this burden is to temporarily transfer an SRE into the overloaded team. Google’s guidance to the SRE who will be embedded in a team:

  • Phase 1: Learn the Service and Get Context – Remind the team that more tickets should not require more SREs and emphasize healthy work habits that reduce the time spent on tickets. Some of these habits are focusing on non-linear scaling of services, identifying sources of inordinate stress, and identifying emergencies waiting to happen.
  • Phase 2: Sharing Context – After identifying pain points, suggest improvements and demonstrate better ways to work. Some examples are writing a good postmortem for the team or identifying root cause for frequent issues and suggesting solutions.
  • Phase 3: Driving Change – Nudge the team with ideas based on SRE principles and help them self-regulate. This can be done by helping the team fix any basic issues (like defining SLO), coaching team members to address issues in a permanent way or asking leading questions.

Communication and Collaboration in SRE: There is tremendous diversity in SRE teams as it includes people with various skills such as systems engineering, software engineering, project management, etc. Also, given the nature of responsibilities handled by SRE, team members tend to be more distributed across geographical regions and time zones when compared to product development. Considering these aspects, communication and collaboration among SRE teams and across other teams should be designed to address the joint concerns of production and the product in an atmosphere of mutual respect. There should be forums (like weekly Production Meetings) for the SRE team to articulate the state of the system they support and highlight improvement opportunities to Product Development.

The Evolving SRE Engagement Model: The focus so far has been on onboarding SRE support for a product or service that is already in production. While this “classic” engagement model is commonly a good starting point, there are two other models that are better at embedding SRE principles and practices earlier in the development lifecycle. Let’s look at all three models, starting with the classic one.

  • Simple PRR (Classic) Model: When SRE receives a request for taking over production management, it gauges both the importance of the product and the availability of SRE teams. The SRE and development teams then agree on staffing levels to facilitate this support, followed by a Production Readiness Review (PRR). Once the gaps and improvements identified by the review are addressed, the SRE team assumes its production responsibilities.
  • Early Engagement Model: SRE participates in Design and later phases, eventually taking over the service any time during or after the build phase.
  • Evolving Services Development – Frameworks and SRE Platform: As the industry moves towards microservices architecture, the number of requests for SRE support and the cardinality of services to support will increase. To effectively address the increased demand, all microservices should adopt structured frameworks for production services. These frameworks include codified SRE best practices that are “production ready” by design and reusable solutions to mitigate scalability and reliability issues. A production platform built on top of such frameworks with stronger conventions reduces operational overhead.

These five ways of working should help establish and reinforce SRE teams in an organization. And with this, we come to the end of the SRE overview series. I strongly recommend reading Google’s book to get a comprehensive understanding of SRE. As the industry moves further towards microservices and cloud, a traditional support model that is predominantly based on manual operations will not be scalable or sustainable. The sooner organizations embark on pivoting towards an engineering-oriented support model, with the necessary investments in technology and people, the better for the products and services they provide.

SRE – Practices

After covering the motivation behind SRE along with the responsibilities and principles in previous blogposts, this one will focus on “how” to get there by leveraging SRE practices used by Google. The book explains 18 practices and I strongly recommend reading the book to thoroughly understand them. I have provided a brief summary of the most common and relevant practices here.

The book characterizes the health of a service similar to Maslow’s hierarchy of human needs, with basic needs at the bottom (starting with Monitoring) going all the way up to taking proactive control of the product’s future rather than reactively fighting fires. All the practices fall under one of these categories.

Monitoring: No software service can sustain itself in the long term if customers usually come to know of problems before the service provider does. To avoid this situation of flying blind, monitoring has always been an essential part of supporting a service. Many organizations have L1 Service Desk teams that either manually perform runbook-based checks or visually monitor dashboards (ITRS, AppDynamics, etc.) looking for any service turning “red”. Both approaches involve manual activity, which makes monitoring less effective and inefficient. Google, being a tech-savvy organization, has always had automated monitoring through custom scripts that check responses and alert.

  • Practical Alerting from Time-Series Data: As Google’s monitoring systems evolved using SRE, they transformed to a new paradigm that made the collection of time-series a first-class role of the monitoring system, and replaced those check scripts with a rich language for manipulating time-series into charts and alerts. Open source tools like Prometheus, Riemann, Heka and Bosun allow any organization to adopt this approach. For organizations still relying heavily on L1 Service Desks, a good starting point will be to use a combination of white-box and black-box monitoring along with a production health dashboard and optimum alerting to eliminate the need for manual operations that only scales linearly.
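To make the time-series idea concrete, here is a minimal Python sketch of alerting on a metric stream rather than on a one-shot check script. The names, thresholds and the `Sample` type are illustrative assumptions, not Prometheus code; the "stay above threshold for a sustained window" behavior loosely mirrors a Prometheus-style `for:` clause that suppresses alerts on momentary spikes.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    timestamp: float  # seconds since epoch
    value: float      # e.g., error rate in errors/sec

def should_alert(samples, threshold, duration):
    """Fire only if the value stays above `threshold` for at least
    `duration` seconds, so a single noisy data point does not page
    anyone."""
    breach_start = None
    for s in sorted(samples, key=lambda s: s.timestamp):
        if s.value > threshold:
            if breach_start is None:
                breach_start = s.timestamp
            if s.timestamp - breach_start >= duration:
                return True
        else:
            breach_start = None  # breach ended; reset the window
    return False
```

A check script can only report "red" or "green" at one instant; evaluating a rule like this over collected time-series lets the same data drive charts, trend analysis and alerts.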

Incident Response: Incidents that disrupt a software service dependent on numerous interconnected components are inevitable. SRE approaches these incidents as an opportunity to learn and to remain in touch with how distributed computing systems actually work. While Incident Response and Incident Management are used interchangeably in some places, I consider Incident Response, which includes technical analysis and recovery, to be the primary responsibility of the SRE team, whereas Incident Management deals with communicating with stakeholders and pulling the whole response together. Google calls out Managing Incidents as one of the four practices under Incident Response:

  • Being On-Call is a critical duty for the SRE team to keep their services reliable and available. At the same time, balanced on-call is essential to foster a sustainable and manageable work environment for the SRE team. The balance should ensure there is neither operational overload nor underload. Operational overload makes it difficult for the SRE team to spend at least 50% of their time on engineering activities, leading to technology debt and inefficient manual workarounds creeping into the support process. Operational underload can result in SREs going out of touch with production, creating knowledge gaps that can be disastrous when an incident occurs. The on-call approach should enable engineering work as the primary means to scale production responsibilities and maintain high reliability and availability despite the increasing complexity and number of systems and services for which SREs are responsible.
  • Effective Troubleshooting: Troubleshooting is a skill similar to riding a bike or driving a stick-shift car: something that becomes easy once you internalize the process and program your memory to subconsciously take the necessary action. In addition to generic troubleshooting skill, solid knowledge of the system is essential for an SRE to be effective during incidents. Building observability into each component from the ground up and designing systems with well-understood interfaces between components will make troubleshooting easier. Adopting a systematic approach to troubleshooting (like the Triage -> Examine -> Diagnose -> Test / Treat cycle) instead of relying on luck or experience will yield good results and a better experience for all stakeholders.
  • Emergency Response: “Don’t panic” is the mantra to remember during system failures in order to recover effectively. And to be able to act without panic, training to handle such situations is absolutely essential. Test-induced emergencies help SREs proactively prepare for such eventualities, fix underlying problems, and identify other weaknesses before they become outages. In real life, emergencies are usually change-induced or process-induced, and SREs learn from all outages. They also document the failure modes so other teams can learn how to better troubleshoot and fortify their systems against similar outages.
  • Managing Incidents: Most organizations already have an ITIL-based incident management process in place. The SRE team strengthens this process by focusing on reducing mean time to recovery and providing staff a less stressful way to work on emergent problems. The features that help achieve this are recursive separation of responsibilities, a recognized command post, a live incident state document and clear handoffs.
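One tactic that fits the systematic Triage -> Examine -> Diagnose cycle above is bisection: when a request flows through an ordered chain of components, halving the search space with each probe finds the faulty stage in O(log N) checks instead of N. This Python sketch is illustrative; the stage names and the `works_through` probe are hypothetical stand-ins for whatever health check a real system offers.

```python
def find_faulty_stage(stages, works_through):
    """Binary-search an ordered pipeline for the first failing stage.
    `works_through(i)` reports whether a test request succeeds through
    stages[0..i] inclusive. Returns None if the whole chain is healthy."""
    lo, hi = 0, len(stages) - 1
    if works_through(hi):
        return None  # end-to-end check passed; nothing to diagnose
    while lo < hi:
        mid = (lo + hi) // 2
        if works_through(mid):
            lo = mid + 1  # first half healthy; fault lies after mid
        else:
            hi = mid      # fault at mid or earlier
    return stages[lo]
```

For example, with stages `["load-balancer", "frontend", "cache", "database", "storage"]` and a probe that fails from the database onward, the search converges on `"database"` in three probes.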

Postmortem and Root Cause Analysis: SRE philosophy aims to manually solve only new and exciting problems in production unlike some of the traditional operations-focused environments that end up fixing the same issue over and over.

  • Postmortem Culture – Learning from Failure: The primary goals are ensuring that the incident is documented, that all contributing root causes are well understood, and that effective preventive actions are put in place to reduce the likelihood and impact of recurrence. As the postmortem process involves inherent cost in terms of time and effort, well-defined triggers like incident severity are used to ensure root cause analysis is done for the appropriate events. Blameless postmortems are a tenet of SRE culture.

Testing: The previous practices help handle problems when they arise but preventing such problems from occurring in the first place should be the norm.

  • Testing for Reliability is the practice of adapting classical software testing techniques to systems at scale to improve reliability. Traditional tests during the software development stage, like unit testing, integration testing and system testing (smoke, performance, regression, etc.), help ensure correct behavior of the system before it is deployed into production. Production tests like stress, canary and configuration tests are similar to black-box monitoring; they help proactively identify problems before users encounter them and enable staggered rollouts that limit impact in production.

Capacity Planning: Modern distributed systems built using component architecture are designed to scale on demand and rely heavily on diligent capacity planning to achieve it. The following four practices are key:

  • Load balancing at the Frontend: DNS is still the simplest and most effective way to balance load before the user’s connection even starts but has limitations. So, the initial level of DNS load balancing should be followed by a level that takes advantage of virtual IP addresses.
  • Load balancing in the data center: Once the request arrives at the data center, the next step is to identify the right algorithms for distributing work within a given datacenter for a stream of queries. Load balancing policies can be very simple and not take into account any information about the state of the backends (e.g., Round Robin) or can act with more information about the backends (e.g., Least-Loaded Round Robin or Weighted Round Robin).
  • Handling Overload: Load balancing policies are expected to prevent overload, but there are times when the best plans fail. In addition to data center load balancing, per-customer limits and client-side throttling help spread load over tasks in a datacenter relatively evenly. Despite all precautions, when a backend is overloaded, it need not shut down and reject all traffic; instead, it can continue serving as much traffic as it can handle, taking on additional load only as capacity frees up.
  • Addressing cascading failures: A cascading failure is one that grows over time as a result of positive feedback. It can occur when a portion of an overall system fails, increasing the probability that other portions of the system fail. Increasing resources, restarting servers, dropping traffic, eliminating non-critical load, eliminating bad traffic are some of the immediate steps that can address cascading failures.
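The client-side throttling mentioned above can be sketched with the adaptive throttling rule described in the book's Handling Overload chapter: each client tracks how many requests it has attempted and how many the backend has accepted, and locally rejects new requests with probability max(0, (requests − K × accepts) / (requests + 1)). A minimal Python version (the window management a real client would need is omitted):

```python
def reject_probability(requests, accepts, k=2.0):
    """Adaptive client-side throttling: as the backend starts refusing
    work (accepts falls behind requests), the client rejects a growing
    fraction of new requests locally instead of hammering an overloaded
    server. K controls aggressiveness: K=2 lets the client attempt up
    to roughly twice the rate the backend is accepting."""
    return max(0.0, (requests - k * accepts) / (requests + 1))
```

While the backend accepts everything, the probability stays at zero; if it accepts only 40 of the last 100 attempts, the client starts dropping about a fifth of new requests itself, which both sheds load and helps break the positive-feedback loop behind cascading failures.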

Development: All the practices covered so far deal with handling reliability after software development is complete. Google recommends significant large-scale system design and software engineering work within the organization to enable SRE through the following practices:

  • Managing Critical State – Distributed Consensus for Reliability: The CAP theorem provides the guiding principle for determining which properties are most critical. When dealing with distributed software systems, we are interested in asynchronous distributed consensus, which applies to environments with potentially unbounded delays in message passing. Distributed consensus algorithms allow a set of nodes to agree on a value once, but they don’t map well to real design tasks. Instead, higher-level system components such as datastores, configuration stores, queues, locking, and leader election services are built on top of distributed consensus to provide the practical functionality that the algorithms alone don’t address. Using higher-level components reduces complexity for system designers and allows the underlying consensus algorithm to be changed if necessary, in response to changes in the environment in which the system runs or in nonfunctional requirements.
  • Distributed Periodic Scheduling with Cron, Data Processing Pipelines, and Data Integrity (What You Read Is What You Wrote) are other practices covered under Development.
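As a toy illustration of the majority-quorum idea at the heart of consensus algorithms: a value is decided only once a majority of nodes acknowledge it, and because any two majorities of the same group overlap in at least one node, two conflicting values can never both be decided. The Python below models only that arithmetic; real algorithms such as Paxos or Raft also handle leader election, log replication and retries, none of which is sketched here.

```python
def quorum_size(n_nodes):
    """Smallest majority of n nodes. Any two sets of this size must
    share at least one node, which is what prevents two conflicting
    decisions from both being accepted."""
    return n_nodes // 2 + 1

def is_committed(acks, n_nodes):
    """A proposed value is committed once a majority of distinct nodes
    have acknowledged it; a minority of failed or slow nodes can
    neither block nor contradict the decision."""
    return len(set(acks)) >= quorum_size(n_nodes)
```

For a five-node group the quorum is three, so the system keeps making progress with up to two nodes down, which is why consensus-backed components like locking and leader election can themselves be highly available.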

Product is at the top of the pyramid for any organization. Organizations will benefit by practicing Reliable Product Launches at Scale, using a Launch Coordination Engineering role to set up a solid launch process with a launch checklist.

These practices shared by Google provide a comprehensive framework to adopt across software development lifecycle to improve reliability, resilience and stability of systems.