The Pilgrim Blue Travel Journal

How we get to where we’re headed.

Crafting Mindful Solutions That Grow Your Business

Pilgrim Blue began with a simple vision: to empower digital product development and innovation through mindful practices. After 20 years of making software, I found that my solutions were always geared towards solving a problem by creating something that didn't exist before. I was adding software to a world that was growing faster than ever before. Software innovation through the 2010s was exploding as businesses raced to keep pace with start-ups.

New start-ups were bursting onto the scene daily and either growing rapidly or being acquired by bigger competitors. The start-up scene had a major advantage over the competition: the ability to respond quickly to the market. By operating with a few people at first and creating targeted solutions, start-ups had the benefit of defining focused requirements before designing and engineering a product.

Larger organizations lacked the agility of their nimble competition. Decades-old processes and lines of business were still performing their original tasks and protecting the business from the very risk that start-ups were feeding on. Because a three-person shop of college students building apps in their dorm could absorb that risk, they were at a huge advantage, and the market rewarded them.


Why Software Fails

Before any business starts building a piece of software, it plans. Stakeholders gather with a problem or improvement in mind, identify the cost, risk, and goals of a project, and assign the project a value through a budget. Thus begins the process of identifying solutions while managing the associated risk. The process is set in motion, and within an agreed-upon time the solution is implemented, reviewed by the primary stakeholders, and released to the market, hopefully after a period of testing to mitigate the risk. It succeeds or fails depending on a complex series of decisions between the initial assessment and the final deliverable.

This process has worked for many and will yield results, but how many times have you released a product to market and wondered why it succeeded or failed? I've sat with many teams struggling with this question after launching a piece of software. Typically I would advocate for a targeted analytics process to identify data points, like user retention, time spent in the site or app, and which features users engage with, but much of that is bolted on after the software has been developed.
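To make that concrete, here is a minimal sketch, in Python, with a hypothetical Event record and metric names of my own choosing, of the kind of data points I mean. The value is less in the code than in defining these measurements before the software ships instead of reverse-engineering them afterwards.

    from dataclasses import dataclass
    from datetime import datetime

    # Hypothetical event record; a real product would pull this from its analytics store.
    @dataclass
    class Event:
        user_id: str
        feature: str          # what the user engaged with
        timestamp: datetime

    def retention(events: list[Event], cohort: set[str],
                  window_start: datetime, window_end: datetime) -> float:
        """Share of a launch cohort that came back within the window."""
        returned = {e.user_id for e in events
                    if e.user_id in cohort and window_start <= e.timestamp <= window_end}
        return len(returned) / len(cohort) if cohort else 0.0

    def engagement_by_feature(events: list[Event]) -> dict[str, int]:
        """Raw interaction counts per feature, a rough view of what users actually touch."""
        counts: dict[str, int] = {}
        for e in events:
            counts[e.feature] = counts.get(e.feature, 0) + 1
        return counts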

Recently I was having coffee with a friend who is a consultant at one of the big tech consultancies, and we were talking about his project: he and his colleagues are helping a large company fix a data problem that spans multiple lines of business. I had to level with him that his solution is going to take a long time to implement and will likely involve a lot of compromises, resulting in something complex and ineffective that addresses the organization's political needs but isn't optimized for the core requirements.

This is because my friend is going to implement too big a solution; the process in place dictates that result directly. You can adjust the scope of work all you want, but at the end of the project, a piece of software optimized to serve multiple business units ensures that the data collected afterwards will lack focus.

If at the end of a program you can’t identify the initial requirements that brought the desired result, then regardless of outcome your software has failed.


How to Fail, Even When You Succeed

Success and failure are relative terms, so to ensure consistent success we need to define a desired outcome. If your desired outcome is bottom-line revenue growth, that's an easy metric to weigh against, but to understand what brought about that outcome we need a process that retains information along the way and correlates causes with effects.

To correlate results with a desired outcome, let's look at a simple model. For this we'll use Acme Banking as a mock company:

  • Desired Outcome: Acme Banking wants a more effective collection process across 5 lines of business.

  • Hypothesis: A centralized AI platform can provide data on how to best interface with customers in collecting loan payments.

  • Result: The AI platform was implemented, resulting in 10% revenue growth through an AI-powered communication design delivered over SMS and e-mail channels.

All of this is super hypothetical, but I designed it to address a few key points. We could call 10% growth a positive result, but is that 10% the floor or the ceiling of the potential growth? Is the solution incredibly effective in one line of business while cannibalizing another?
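To show what answering those questions takes, here is a small sketch extending the Acme Banking example. Every name and number in it is invented; the point is only that results are recorded per line of business, alongside the hypothesis they test, so a headline figure like 10% can be decomposed rather than taken at face value.

    from dataclasses import dataclass

    @dataclass
    class Experiment:
        desired_outcome: str
        hypothesis: str

    @dataclass
    class Result:
        line_of_business: str
        revenue_change_pct: float   # measured per line of business, not only in aggregate

    experiment = Experiment(
        desired_outcome="More effective collections across 5 lines of business",
        hypothesis="A centralized AI platform improves payment-collection outreach",
    )

    # Invented numbers: an aggregate gain can hide a loss in one line of business.
    results = [
        Result("Auto loans", +22.0),
        Result("Mortgages", -5.0),
        Result("Credit cards", +18.0),
        Result("Personal loans", +9.0),
        Result("Small business", +6.0),
    ]

    aggregate = sum(r.revenue_change_pct for r in results) / len(results)
    print(f"{experiment.hypothesis}: {aggregate:+.1f}% average change")
    for r in results:
        print(f"  {r.line_of_business}: {r.revenue_change_pct:+.1f}%")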

A simplified example of a waterfall methodology

Without a full understanding of why and how a solution achieves a desired outcome, it is impossible to tell if the implemented solution is operating at its full potential.

The software development process in most of these monolithic organizations is often described as “waterfall” because each phase of work cascades into the next. This adds layers of decision-making and pivoting between the initial planning and the execution of the solution.


How Start-Ups Succeed Where Others Fail

Start-up culture is built around producing results, but budgetary constraints and a duty to investors also enforce a more mindful process. Start-up tech companies report regularly to investors on their growth and operate with the limits of their initial capital investment in mind. Limited capital means that to grow the business to another funding round, you need a compelling narrative supported by data. This has lent itself to success within an agile methodology.

The agile process focuses on establishing a plan to continually adapt. This adaptability allows teams to pivot where needed and lean into their successes, but it demands fluidity in the desired outcome. Agile is incredible; the embedded video provides some context, and I'll post a deeper explanation of agile in a future article.

Larger organizations find agile too undefined to establish full confidence among stakeholders. When they can't exert control over the desired outcome, I can easily understand their resistance to the process.


Spiraling into Control

My simplified spiral method

While working on solutions for a Fortune 500 company recently, I found that much of the source of their technical woes was a lack of context and communication in the software development process. With some research I found a workflow that I thought would be game-changing for how they engineer solutions: the Spiral Model [1]. This model makes an incredibly compelling argument for a middle path between agile and waterfall processes.

After more research, I reached out to a close friend at Google who's been around long enough to know a thing or two. After a long discussion about the pros and cons of the model, I found validation for my theory. With that validation in hand I started refining the concepts, validating them further through conversations with colleagues, and distilled the method into an easily adaptable form. If you poke around my writing and website enough, you'll find references to this model.

My simplification of this process revolves around four phases within one cycle, or “spiral”, as the method calls it; a brief sketch of how I model a single spiral follows the definitions below.

Definition of terms

  • Spiral: One full cycle of work. The time allocated to a single spiral can change depending on the requirements driving it, but most engagements start with a small commitment to validate the initial process.

  • Discovery: The first phase of the process. Here is where we collaborate on gathering details about the business problem inspiring the need.

  • Vision: Here is where we cast the vision for the work to be done, analyze risk, and evaluate alternatives. We also synthesize the discovery into action.

  • Creation: Rubber, meet road. This is the phase where we create the deliverable, which in the first spiral is generally a recommended course of action. Later spirals produce proofs of concept or production-ready code, depending on the need.

  • Decision: Here we collaborate on the decision that will drive the next spiral. Will we conclude our work with a presentation that you take to a development agency? Will we continue research? Will we move to a proof of concept? It’s entirely up to you.
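As noted above, here is a minimal sketch of how a single spiral could be modeled. The class and field names are mine and purely illustrative; they are not part of the original Spiral Model [1] or of any tool I ship.

    from dataclasses import dataclass
    from enum import Enum

    class Phase(Enum):
        DISCOVERY = "discovery"   # gather details about the business problem
        VISION = "vision"         # cast the vision, analyze risk, weigh alternatives
        CREATION = "creation"     # produce the deliverable for this spiral
        DECISION = "decision"     # decide what, if anything, drives the next spiral

    @dataclass
    class Spiral:
        objective: str                 # the business problem driving this spiral
        commitment_weeks: int          # small at first, revisited every cycle
        deliverable: str               # recommendation, proof of concept, or production code
        next_step: str | None = None   # set during the decision phase; None ends the work

    # The first spiral usually carries a small commitment and a lightweight deliverable.
    first_spiral = Spiral(
        objective="Centralize collections data for Acme Banking",
        commitment_weeks=2,
        deliverable="Recommended course of action",
        next_step="Proof of concept for the highest-value line of business",
    )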


Decision Making and Mindfulness

How does this relate to mindfulness? Ultimately it comes down to the decision phase. I like to try and answer some key questions in this phase.

  • Does the deliverable we created address the business problem we initially discovered?

  • Has the deliverable captured the vision while mitigating risk? Do we feel strongly enough about its conclusions to justify the associated risk?

  • How does the deliverable affect the business as a whole? If it increases the stress of managing the initial proposal, can we identify the reason and mitigate the impact?

  • Can we agree on where to focus our attention in the next spiral?

This phase of decision-making is designed to create introspection and self-awareness within the business, and it keeps everyone on target before we continue the effort and increase the risk of the program.
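To close the loop, here is a tiny sketch of that decision gate, with the questions phrased so they can be answered yes or no. The wording and names are mine and purely illustrative; in practice this phase is a conversation, not a script.

    # The questions mirror the decision-phase list above; the phrasing is mine.
    DECISION_QUESTIONS = [
        "Does the deliverable address the business problem we discovered?",
        "Does it capture the vision, and does it justify the associated risk?",
        "If it adds stress to the wider business, have we identified and mitigated the cause?",
        "Can we agree on where to focus the next spiral?",
    ]

    def ready_for_next_spiral(answers: dict[str, bool]) -> bool:
        """Continue, and accept the added program risk, only on a clear yes to every question."""
        return all(answers.get(question, False) for question in DECISION_QUESTIONS)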