Sunday, November 5, 2023

Is Your Product Secure Enough? (Part 2 of 2)

For the past 30 years it has been a widely expected, common practice for enterprise software vendors to commit to always shipping zero-defect software. Where any issues were found, vendors committed upfront to correct them within a predetermined timeline depending on the issue's severity. For example, customers would expect all Critical issues to be resolved within 30 days, and all High severity issues within 60 or 90 days.


These days, however, it's becoming increasingly evident that this fixed-timeline approach, while well intentioned, is impractical and in some ways delusional.

Software vendors can no longer commit in good faith to a fixed number of days to address discovered or reported issues because:

  • The number of product security/quality issues we will face in the future is unknown. At any point in time, a new issue may be reported to or discovered by vendors internally (e.g. using code scanning tools).
  • The amount of time to resolve or mitigate any security/quality issue is also unknown in advance.
Software businesses should classify all known security and quality issues into two groups, based on how they impact their operations. 
  • Group 1: This category mandates that we drop all planned activities and throw as many resources as we have at the issue to address it as fast as possible (and yes, however improbable it may sound, we may run out of resources, much like some hospitals ran out of beds at the peak of COVID). You may designate these as all Sev 1 (and potentially Sev 2) quality issues and CVSS 9.0 and higher security issues. I want to stress that this category of work trumps everything else. This applies to issues reported to us as well as to issues we discover ourselves.
  • Group 2: This category of issues is managed in a planned manner using managed resource allocation (reviewed and adjusted periodically, e.g. quarterly) and disciplined prioritization. This may include Sev 3/4 quality issues and CVSS 8.9 and below security issues. For this to work, we must diligently rank all the issues on the list from 1 to N. At any given point, a developer or team assigned to this work should grab what's at the top of the list, not what has been sitting on it the longest. Prioritizing merely as High, Medium, or Low will not work, as it leaves developers to pick at their own discretion from a pool of many issues; if we don't rank, we will not necessarily be addressing the most critical issues at the time. For security issues, we need the CVSS score calculated before making the change to the product, to reliably justify the complete cost of making the change (not just editing the code). That cost for the vendor and customers includes our testing, packaging and distribution, communication internally and to customers, training our support staff, customers downloading and deploying a modified version of the product, and in some cases customers testing their implementations before ultimately pushing them to their users.
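The two-group triage can be sketched in code. This is a minimal illustration, not a prescribed implementation; the names (`Issue`, `next_issue_to_work`) are hypothetical, while the CVSS 9.0 cutoff, the Sev 1 designation, and the strict 1-to-N ranking come from the practice described above:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Issue:
    ident: str
    cvss: Optional[float] = None   # None for pure quality (non-security) issues
    severity: int = 3              # quality severity: 1 (worst) through 4
    rank: int = 0                  # explicit 1..N rank within Group 2

def is_group_1(issue: Issue) -> bool:
    """Group 1: drop everything and fix now (Sev 1 quality, CVSS >= 9.0)."""
    if issue.cvss is not None and issue.cvss >= 9.0:
        return True
    return issue.severity == 1

def next_issue_to_work(backlog: List[Issue]) -> Optional[Issue]:
    """Group 1 trumps everything else; otherwise take the top-ranked
    Group 2 item, not the one that has sat on the list the longest."""
    group_1 = [i for i in backlog if is_group_1(i)]
    if group_1:
        return group_1[0]
    group_2 = sorted((i for i in backlog if not is_group_1(i)),
                     key=lambda i: i.rank)
    return group_2[0] if group_2 else None
```

The point of the explicit `rank` field is that every Group 2 item has a unique position, so "what to work on next" is never left to individual discretion.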
The approach described here will scale better in the face of the growing velocity of incoming security challenges and the increased uncertainty of the complex world of enterprise software.

What is your experience with this?

Sunday, October 29, 2023

Is Your Product Secure Enough? (Part 1 of 2)

There are different types of security vulnerabilities in every product, and I stress, every product.

  • Those that no one discovered yet
  • Those that have been discovered quietly and that the world does not know about, where either
    1. no one has exploited them yet, or
    2. someone is quietly exploiting them as we speak
  • Those that the vendor knows about and has not yet made the patch available (for any number of reasons)
  • Those for which the patch exists, but the users have not applied it (for any number of reasons)

My point is there is no such thing as defect/bug/vulnerability free software.

It’s a vendor's responsibility to find the best possible balance between the cost of the solution and the value it provides to the customers, minus the risk of harm it can cause.

A lot has been written about Zero-Defect Software, and while the intent is certainly noble, organizations that focus solely on this are missing the point.
  • This blog post about the Zero-Defect Software concept, written through the eyes of a QA lead (Nyall Lynch), applies equally well to dealing with security defects, commonly referred to as vulnerabilities.
The key question Product Managers need to answer is: "What level of risk, given the nature of our product and its application, is acceptable for us as a vendor?" In other words: "Is our build secure enough to ship?"

A responsible vendor will also ensure that product releases already in customers' hands remain secure enough for continued use until those releases reach the end of the communicated maintenance window.

In Part 2 I will share some practices that can help large-scale enterprise software vendors continue delivering value in the face of growing uncertainty about product security risks.

Tuesday, December 27, 2022

Third-Party Products - Who Owns Them?


A lot goes into production of complex products, and not just cars. Complex software products commonly incorporate and interact with hundreds of other, third-party software and non-software products.

So what does this mean for Product Managers?

Usually, there is no need for Product Managers to get involved with the third-party products used to build their products. Except in special circumstances, this is left to Architects (technology strategy), Engineering (quality and security), and Legal (IP rights).


Product Managers do, however, need to direct and provide recommendations and guidance on the third-party products that interact with and enable the operation of their products.


Allow me to use the automobile industry to illustrate this concept.



Let’s look at the role of the third-party components through the eyes of car manufacturers and car users.


When Toyota is producing its cars, it inevitably embeds the output of other companies in its products. For example, the door panels are formed from metal coming from companies specializing in smelting ore into sheet metal. Toyota evaluates what’s available on the market and elects to use the product it believes is best for its cars. In this example, sheet metal is a third-party product that once chosen by the product vendor is permanently embedded into the product and can’t be swapped out. Let’s call this a Class C third-party product.


Next, Jeep designers equip their cars with Bosch spark plugs. While these spark plugs can be replaced by the product user (e.g. once they are worn out), the car cannot operate without them. Let's call this a Class B third-party product.


Finally, the car manufacturer recommends certain conditions, care, and third-party products for its cars' optimal performance, safety, and longevity. For example, it may recommend the use of high-octane gasoline/petrol, a particular oil change interval, or certain load limits. All these criteria are outside of the car manufacturer's direct control (they are the responsibility of the user), and yet they are expected to impact the product. Let's call the third-party products in this category Class A.


Now, to draw a parallel with Software Products, 

  • Class A is the expected environment (hardware configuration, network connectivity, Operating System, web browsers, Java, .NET) that the product user is expected to provide and maintain.
  • Class B is the stuff that may be included with the product. The end-user has control over it. The vendor should explain the recommended steps to maintain this stuff in optimal and safe working order, compatible with continued eligibility for the overall product support and maintenance services.
  • Class C is the stuff that gets embedded or compiled into the product. The end-user has no control over it.
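The three classes above can be captured as a simple data structure. This is a hypothetical sketch (the names `ThirdPartyClass` and `pm_owns_strategy` are mine, not an established model); the ownership rule encodes the point made below, that Product Management owns strategy and communication for Classes A and B only:

```python
from enum import Enum

class ThirdPartyClass(Enum):
    """Labels for the three classes of third-party products described above."""
    A = "expected environment, provided and maintained by the user (e.g. OS, browser, Java)"
    B = "shipped with the product, replaceable by the user (like spark plugs)"
    C = "permanently embedded into the product (like sheet metal)"

def pm_owns_strategy(cls: ThirdPartyClass) -> bool:
    """Product Management owns support/compatibility strategy and
    communication for Class A and Class B; Class C is left to
    Architects, Engineering, and Legal."""
    return cls in (ThirdPartyClass.A, ThirdPartyClass.B)
```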


Product Management owns support and compatibility strategy, and communication about Class A and Class B third-party products as explained above.


Please let me know your thoughts or experience with this.

Wednesday, October 20, 2021

Team Calls during COVID-19

These days especially, with most of us working remotely, I often find myself on calls with a large number of participants who don't actively participate in the discussion. Their cameras are turned off and their microphones are muted.

Am I alone in this observation?



I am not referring to presentations or town hall type of meetings where information is shared with large groups. These are team calls where active collaboration and decision making are meant to take place.

In the past, when we could meet in person, I could see every participant's body language. I could see when people who did not say a word during the meeting were actively listening, leaning forward, or showing signs of disengagement: slumped in their chairs, arms crossed, checking their phones or computers... I have no such feedback with everyone joining remotely these days. Furthermore, I have a sense our calls on average include more people these days, and I wonder if there is data to confirm or deny this hypothesis.

So, why do I care?

For one, these calls cost companies a lot of money. They usually include attendees who are smart and well paid, and whose time is valuable to the company.

Equally important, but less obvious, I think this practice is obscuring another important set of issues.

While attending these large calls creates a sense of collaboration, signals cultural conformance, and seems to foster improved information exchange, in reality most attendees are simply multitasking. On the outside it looks like our productivity goes up: we attend more calls while answering more emails and producing more documents, presentations, reports, quotes, code, etc. In practice, we get overworked and stressed, and the quality of our work suffers.

I want to go back in time and only attend calls where everyone pays 100% attention and decisions are made. Everything else I would rather read, subject to my time availability; and if I don't have the capacity to process more information, I will not pretend I can handle it by attending the call with my mic muted and webcam off.

What do you think?

Friday, March 8, 2019

Engaging your online audience

I was wondering recently why no one responded to a community post by one of our Architects seeking customers' feedback on a product improvement idea, so I took a closer look. The title is catchy and stands out in the list of posts, so that's good.

As is often the case with technical writing, I found the writing style to be the main detractor. Below I am sharing some pointers that would help.

  1. The post is overly verbose. Half (or more) of the content could be cut without hurting the post's objective. A shorter post would encourage more busy people to take the plunge and read it.

  2. Formatting techniques can be employed to improve readability.

    • Paragraphs would help. Without them, everything looks like one big blob of black-on-white characters. It's hard to rest the eye on any particular part of the post. Nothing jumps out.
    • Questions could be grouped into concise bullet lists.
    • Headers could be used to break the post into 2-3 sections, e.g. Background; Challenge; Our Questions.
    • Bold or underlined font could be used to highlight key concepts (just don't overdo it, or the post will look like a glittery Christmas tree).

  3. A picture could help spice up the post and attract attention.


All of the above pointers are also applicable to writing emails and blogging.

Wednesday, February 20, 2019

Listening to Customers


What’s important in interactions with customers is how we interpret what we hear. As Product Owners/Managers (PO/PM), our challenge is to listen actively and always seek the underlying pain points.

Customers may be telling us how they want to change our product, but we always need to dig deeper to understand why. Never be afraid to follow one “Why?” with another, and another, and yet another. Sometimes it takes several attempts to get to the bottom of why the customer really wants us to change the product. If we understand the real need, the real pain point, then we can present the challenge to our engineers and architects in a way that leaves room for them to address the root of the issue in the most creative and effective manner, potentially much better than what the customer had in mind.

Other times, the customer may not even mention the pain point, not realizing they have it. Here again, a PO/PM with effective interviewing skills can pick up on a potential opportunity to create value for the customer, to alleviate a pain point, to offer them something they would be willing to pay for.

Last, but not least, PO/PMs manage products, not tailored solutions. We therefore need to look for common needs, for ways to improve our products for all (or many) customers, not just one. This requires the ability to connect the dots, draw parallels, and recognize patterns when interviewing customers. Oftentimes customers describe their needs, issues, and challenges using different words and propose different solutions. No one is in a better position than a PO/PM to translate this stream of data into an actionable product roadmap and backlog.

Thursday, February 14, 2019

Stretch Objectives

What do I think about them?


ox·y·mo·ron


/ˌäksəˈmôrˌän/
noun
a figure of speech in which apparently contradictory terms appear in conjunction (e.g. faith unfaithful kept him falsely true ).


Way too often, Stretch Objectives are understood and used by teams in a way that makes them a classic case of an oxymoron. And no wonder: “Stretch” implies uncertainty and hope. “Objectives”, on the other hand, imply exactly the opposite: certainty and commitment.

Teams may find it more self-fulfilling to present longer lists of objectives at the end of Program Increment (PI) planning by including Stretch Objectives alongside PI Objectives.

The reality, however, is that the product organization can only count on committed PI Objectives, not on hopeful Stretch Objectives. Stretch Objectives are merely work items that the organization wanted teams to take on but, in the end, admitted could not be committed to in the given timeframe due to some constraint.

Furthermore, the understanding of the relative importance (priority/ranking) of Stretch Objectives is a snapshot in time, valid only at the time of PI Planning. As teams move through the Program Increment and receive new inputs, product needs and priorities may change, potentially invalidating or reprioritizing the original Stretch Objectives. For this very reason, as a good practice, teams should re-confirm with Product Management later in the PI before starting work on any Stretch Objective.

Here is how SAFe defines Stretch Objectives:
Stretch objectives help improve the predictability of delivering business value since they are not included in the team’s commitment or counted against teams in the program predictability measure.
Unfortunately, way too often teams forget the difference and keep Stretch Objectives on the list just like PI Objectives. Over time I found it best to simply move all uncommitted PI Planning candidates back to the parking lot for consideration in future Program Increments.

What's your experience with Stretch Objectives?