Categories: AI Partnerships

8 Ways AI Will Transform Product Line Engineering (PLE)

by Dr. Michael Jastram and Dr. Yang Li

Today’s customers demand products that are perfectly tailored to their needs. At the same time, product complexity is rising. Both trends cause the number of product variants to increase. Systematically co-developing multiple products in a single product line is the focus of Product Line Engineering (PLE). Many product companies started without proper PLE, but realize sooner or later that they need it in order to master the increasing number of variants. Other companies already operate product lines successfully and would like to optimize their continuous evolution. All of them can benefit from AI.

Below we present eight ways AI could help companies master these challenges, grouped by the phase of the product line lifecycle in which they apply.

Getting Started in PLE with AI

Building up a product line with feature-based PLE requires upfront investment, for example defining the scope of the product line, creating initial feature models that hold the variability information, defining the variants, and so on. In the following, we describe some potential AI-powered use cases that can mitigate the pain and reduce the costs of getting started with PLE.

Please note: in this article, we discuss possible applications of AI techniques in the context of feature-based PLE. If you would like to learn more about feature-based PLE, it is described in ISO/IEC 26580:2021.

1. Extract Features from Legacy Artifacts

Let’s be honest: How many projects start with variant management in mind? Very few. Therefore, many organizations have piles of documentation for existing products, which provide valuable input for starting a product line in the domain of their existing product(s). However, these legacy artifacts are often unstructured and hard to reuse.

There are many ways to use Natural Language Processing (NLP) techniques to analyze these legacy artifacts, like requirements specifications and product descriptions, in order to extract features, for instance functional features. For example, extracting key words and phrases from the requirements specifications could be the initial step toward a candidate feature list. Various techniques can automate this extraction, including TF-IDF (a frequency-based method), TextRank (a graph-based method), neural word embeddings (e.g. BERT), semantic role labeling, and named entity recognition. Key words and phrases ranked as highly significant by these techniques (or a combination of them) may represent functionality that can be expressed as features. With the development of neural networks and deep learning, several of these techniques have achieved higher accuracy on general NLP benchmarks, which supports their adoption in PLE. But further analysis is indispensable to refine the feature list and to establish reliable traceability between the extracted features and the legacy artifacts. This traceability enriches the domain knowledge of a feature with complete documents rather than isolated key words or phrases, which is very helpful for both manual and automated analysis.
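As a minimal illustration of the frequency-based end of this spectrum, the sketch below scores candidate feature terms with TF-IDF using only the Python standard library. The requirement sentences and the resulting terms are invented examples, not the output of any particular PLE tool; a production pipeline would add tokenization, stemming and phrase detection.

```python
import math
from collections import Counter

def tfidf_keywords(docs, top_n=3):
    """Score terms per document by TF-IDF and return the top_n per document."""
    tokenized = [doc.lower().split() for doc in docs]
    n_docs = len(tokenized)
    # Document frequency: in how many documents does each term occur?
    df = Counter(term for tokens in tokenized for term in set(tokens))
    results = []
    for tokens in tokenized:
        tf = Counter(tokens)
        # Terms occurring in every document (e.g. "the", "shall") get idf = 0
        # and drop to the bottom of the ranking automatically.
        scores = {term: (count / len(tokens)) * math.log(n_docs / df[term])
                  for term, count in tf.items()}
        ranked = sorted(scores, key=scores.get, reverse=True)
        results.append(ranked[:top_n])
    return results

requirements = [
    "the pump shall stop when the tank pressure exceeds the limit",
    "the display shall show the tank level and the pump status",
    "the heater shall stop when the temperature exceeds the limit",
]
print(tfidf_keywords(requirements))
```

Terms like “pressure” that distinguish one requirement from the others rise to the top, giving a starting point for the candidate feature list.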

It’s one thing to have a feature, another to have its description. Once a feature has been identified, AI can help describe it: it could compare similar feature descriptions in different documents and merge them into a unified description, or apply automatic text summarization techniques to keep all the content manageable.

2. Extract Variability Information from Legacy Artifacts

Variability information is vital to PLE. At the same time, it is challenging to identify such information when product variants have evolved over years without systematic variant management. AI can mitigate the pain of such reverse engineering by analyzing the legacy artifacts to identify exact or potential variability information, such as whether a feature is optional or mandatory and whether it works in combination with, or conflicts with, other features, and by structuring this information in a way users can easily understand.

In order to extract the relationships among features, the legacy artifacts related to those features need further analysis. For example, AI can group related features by functionality, semantic information, or potential associations mined from the corresponding legacy artifacts. Various techniques, such as language modeling, similarity calculation, clustering, and association rule mining, can serve this goal. Arriving at a good structure of the features (i.e. a feature model) is a real milestone in the feature modeling phase; the result depends on how similar the features are from different perspectives, how strong their relationships are, and what goals the feature model should achieve.
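To make the grouping idea concrete, here is a deliberately simple sketch that clusters candidate features by Jaccard similarity of their names. It is a stand-in for the similarity-calculation and clustering techniques mentioned above; real approaches would compare richer semantic information than word overlap. All feature names and the threshold are invented.

```python
def jaccard(a, b):
    """Word-set overlap between two feature names, in [0, 1]."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def group_features(features, threshold=0.3):
    """Greedy single-pass clustering: put each feature into the first
    group whose representative is similar enough, else start a new group."""
    groups = []
    for feat in features:
        for group in groups:
            if jaccard(feat, group[0]) >= threshold:
                group.append(feat)
                break
        else:
            groups.append([feat])
    return groups

features = [
    "tank level display",
    "tank level alarm",
    "pellet dosing unit",
    "pellet dosing calibration",
]
print(group_features(features))
```

The resulting groups suggest candidate parent features (e.g. a “tank level” branch and a “pellet dosing” branch) for the initial feature model, which a domain expert would then review.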

3. Extract Feature Configurations from Legacy Artifacts

Feature configurations for product variants are indispensable, and extracting them from legacy artifacts can greatly reduce the cost and effort of migrating to a product line. The legacy artifacts of similar products can be seen as variant-specific assets, but they were not developed with a feature-model-based method. AI can help recover the missing links between artifacts and features, from which feature configurations for the product variants can then be formulated.

Existing feature models are a prerequisite for this: if you know nothing about your features in advance, it is impossible to derive feature configurations from the legacy artifacts. Feature models maintain the variability information of a family of products in a structured way. To automate the extraction of configurations, you first need to help computers understand the meanings of the existing features. That is, you have to digitalize the domain knowledge in a format that computers can easily process, analyze, and understand. Combining feature models with knowledge graphs or semantic networks that hold the domain knowledge is one way to achieve this. Furthermore, the initially extracted feature configurations can be analyzed with similarity calculation and clustering algorithms to optimize the number of variants and their corresponding feature configurations.
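The sketch below illustrates the recovery step in miniature: a hypothetical keyword map per feature (a tiny stand-in for the domain knowledge a knowledge graph would hold) is matched against a legacy product description to reconstruct that product’s configuration. Feature names, keywords, and the description are all invented.

```python
# Hypothetical domain knowledge: which words in legacy documents
# indicate which feature. A real system would use a knowledge graph
# or semantic network instead of flat keyword sets.
FEATURE_KEYWORDS = {
    "LiquidFilling": {"liquid", "pump", "valve"},
    "PelletFilling": {"pellet", "dosing"},
    "RemoteMonitoring": {"telemetry", "dashboard"},
}

def recover_configuration(description):
    """Select every feature whose keywords appear in the legacy text."""
    words = set(description.lower().split())
    return sorted(f for f, kws in FEATURE_KEYWORDS.items() if kws & words)

legacy_doc = "pump controlled liquid filling line with telemetry upload"
print(recover_configuration(legacy_doc))
```

The recovered selection (here: liquid filling plus remote monitoring, but no pellet handling) is a first-pass configuration that still needs validation against the feature model’s constraints.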

AI Reads Your Customers’ and Stakeholders’ Minds

It is essential to any business for their products to bring value to their customers. But in today’s fast paced environment, it is more challenging than ever to get this right. The success of one product does not guarantee that the successor will be successful as well.

AI can help to consume customer input as early as possible and to align it with the overall product line strategy.

4. Map User Needs onto Feature Model

Especially in the B2B environment, organizations may end up building a unique product for each customer, based on their requirements. If you are following PLE best practices, then you already have a solid feature model for your product line. New capabilities will be implemented in the context of the existing product line architecture. AI can help you to map user needs onto your feature model for optimal alignment. This can ease the effort of configuring variants.

In practice, this requires a lot of experience. Consider, for example, a filling station where each customer has differently shaped containers, different requirements for handling the content (liquid? pellets?), and so forth. Designing such a system usually crosses engineering disciplines, and many stakeholders are involved in developing the superset engineering assets for the product line: superset requirements, superset test cases, superset source code, superset architecture, and so on.

Hence, using AI to automatically or interactively map existing features to the corresponding engineering assets across multiple disciplines can substantially reduce the cost and effort of developing a tailored product.

The techniques here are similar to those for extracting features, variability, and configurations from legacy assets described in the section “Getting Started in PLE with AI”. However, they serve a different use case: analyzing the relevance or similarity between user needs or assets and existing features.
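A minimal sketch of such a relevance analysis: ranking existing features against a new user need by cosine similarity of bag-of-words vectors. Real systems would use trained embeddings (e.g. BERT) rather than raw word counts; the feature descriptions and the user need are invented.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two texts as bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

features = {
    "ContainerHandling": "handling of customer specific container shapes",
    "LiquidFilling": "filling of liquid content with pump control",
    "PelletFilling": "dosing of pellet content",
}
need = "we need filling of liquid detergent with precise pump control"

# Rank existing features by how well they match the stated need.
ranked = sorted(features, key=lambda f: -cosine(need, features[f]))
print(ranked[0])
```

The top-ranked feature is the natural anchor point in the feature model for the new requirement; lower-ranked matches can hint at gaps that require new features.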

5. Leverage Customer Input

But where do those user needs come from? AI is already applied to processing customer input in use cases unrelated to PLE. This includes analyzing support tickets or customer support conversations, harvesting user forums for information, processing telemetry from the product, and observing sales behavior.

Today, out-of-the-box AI systems extract valuable information from these data sources, for instance finding out, via automatic sentiment analysis or facial emotion analysis, which features receive more positive feedback from customers and which attract more complaints. But data privacy must be considered: sensitive data such as videos or images containing customers’ personal data must be handled properly. Using this information to improve your product lines is then just a small step.
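For illustration, the sketch below aggregates a lexicon-based sentiment score per feature, a crude stand-in for the off-the-shelf sentiment models mentioned above. The lexicon, feature names, and comments are invented examples.

```python
# Tiny invented sentiment lexicon; real systems use trained classifiers.
POSITIVE = {"great", "love", "easy", "reliable"}
NEGATIVE = {"broken", "slow", "confusing", "crash"}

def sentiment(text):
    """Count positive minus negative words in a comment."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

feedback = [
    ("RemoteMonitoring", "the dashboard is great and easy to use"),
    ("RemoteMonitoring", "love the telemetry view"),
    ("PelletFilling", "dosing is slow and the unit feels broken"),
]

# Aggregate sentiment per feature across all comments.
scores = {}
for feature, comment in feedback:
    scores[feature] = scores.get(feature, 0) + sentiment(comment)
print(scores)
```

Per-feature scores like these make it easy to see which parts of the product line delight customers and which generate complaints.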

6. Identify Non-Existing Functionality

An interesting variant of the above is identifying desired but non-existent features. Sometimes customers articulate this themselves, for example as feature suggestions. Sales might also gain insights by analyzing lost deals. AI-powered data analysis can automatically clean your data, extract the information you are interested in, and produce predictions with pre-trained models. This way, missing features can be mined with a data-driven methodology rather than directly from human intuition, for instance by clustering similar ideas via topic modeling techniques. Using the approaches mentioned above, AI can then map these new features onto the existing feature model.
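As a rough proxy for topic modeling, the sketch below surfaces the most-requested missing capability by counting content words across free-text feature suggestions. The suggestions and the stopword list are invented; a real pipeline would use proper topic models (e.g. LDA) and clustering.

```python
from collections import Counter

# Invented stopword list; a real pipeline would use a full one.
STOPWORDS = {"the", "a", "an", "we", "need", "want", "please", "add", "for", "to"}

def top_topics(suggestions, n=2):
    """Return the n most frequent content words across all suggestions."""
    counts = Counter(
        w for s in suggestions for w in s.lower().split() if w not in STOPWORDS
    )
    return [term for term, _ in counts.most_common(n)]

suggestions = [
    "please add barcode scanning",
    "we want barcode support for inventory",
    "add a night mode to the display",
    "barcode scanning would speed up intake",
]
print(top_topics(suggestions))
```

Here the repeated demand for barcode support surfaces immediately, flagging a candidate feature that does not yet exist in the feature model.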

Understand and Optimize the PLE Value Chain

The evolution of product lines is another point to consider. As product lines grow and the number of variant products increases, how well the product line structure, variability information, and variants are optimized determines whether the product lines can evolve reasonably, controllably, and predictably.

AI can play a role in the evolution of product lines by learning product-line-related knowledge during the PLE process, helping stakeholders make rational decisions.

7. Help Decision Makers and other Stakeholders with Analysis

As product lines evolve over time, a lot of data accumulates: variants, configurations, features, and assets. AI techniques can help analyze this product-line-related big data to provide different perspectives for decision makers.

It would be perfect if there were no variability in the products for each customer, but that is never the case in reality. Growing variability to meet the needs of different stakeholders leads to an exponential increase in the number of possible variants. This explosion of variability increases the workload of the different PLE activities at a large scale. But perhaps not all of this variability is necessary for the product lines, and it should be optimized. The big data you have can be used to analyze your product lines with the help of AI.

For example, structured domain knowledge in digital form (e.g. a knowledge graph), extracted and formulated from big data, could serve as a central knowledge base that helps automate the analysis process. Moreover, predictive analytics supported by machine learning and deep learning techniques, such as decision trees, linear/logistic regression, and neural networks, can be used to train prediction models on historical data with known results. These models can then predict outcomes for new data, which speeds up the analysis and optimization of product variants in an efficient and reliable way.
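As a minimal sketch of such predictive analytics, the code below trains a one-level decision tree (a “stump”) on historical variant configurations with known field outcomes, then reports the feature whose presence best predicts problems. The configurations, outcomes, and feature names are invented; a real analysis would use far more data and a proper ML library.

```python
def train_stump(rows):
    """rows: (set_of_selected_features, had_problem) pairs.
    Pick the single feature whose presence best separates problem variants."""
    features = set().union(*(cfg for cfg, _ in rows))

    def accuracy(feat):
        # Predict 'problem' exactly when feat is selected in the variant.
        return sum((feat in cfg) == bad for cfg, bad in rows) / len(rows)

    return max(features, key=accuracy)

# Invented history: each row is one shipped variant and whether it
# caused field problems.
history = [
    ({"LiquidFilling", "RemoteMonitoring"}, False),
    ({"PelletFilling"}, True),
    ({"PelletFilling", "RemoteMonitoring"}, True),
    ({"LiquidFilling"}, False),
]
risk_feature = train_stump(history)
print(risk_feature)  # -> "PelletFilling": variants with it had field problems
```

Even this toy model shows how combining configurations with in-field outcomes points analysts at the part of the product line that deserves attention first.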

In particular, combining in-field usage data of the products with variability information such as feature models and configurations, and analyzing it with machine learning, can provide new insights into the quality of the current product line, for instance by detecting patterns between feature usage and certain problems of products in the field. Such relationships are not easily detectable in the vast amount of information using traditional data science techniques.

8. End-to-End PLE Intelligence

As mentioned above, AI can help you find variability through reverse engineering when starting a product line, create configurations of product variants, and identify non-existing features. AI can also analyze the big data related to the product lines, providing feedback to product portfolio managers and other stakeholders to further improve them. In different PLE activities, multiple separate AIs may exist to tackle different tasks, which is fine. But it would be better to have one AI spanning these activities to learn their interactions.

Why is End-to-End PLE Intelligence important? Because although each activity may be an independent task, the quality of its result affects the subsequent activities and thereby the entire PLE process. Generic pre-trained models might not be sufficient to realize End-to-End PLE Intelligence, since real-time intelligence is vital to it: the real-time changes of the PLE activities and their impacts need to be learned to enable its evolution. Hence, designing and implementing End-to-End PLE Intelligence is a very complex task that requires the combination of big data, sophisticated deep learning algorithms, and good software engineering.

What’s Next?

Imagine a world where you could feed your system with unstructured data, and you get an optimized feature model out, with descriptions, parametrization and all. This would be the “holy grail” of AI in product line engineering.

While all this may not seem revolutionary right now, there is huge long-term potential in deploying these technologies today. It may make the difference between scaling up and delighting customers – or biting the dust.


About the authors

Dr. Michael Jastram is an entrepreneur with focus on product development technologies. He has a solid technical foundation in software development, having published his first software in 1988.
He understands the path from customer problem to technical solution, having worked at all levels in between, including software architecture, systems engineering, business consulting, and solution architecture.

Michael has published four books, several articles and regularly talks at conferences. He publishes insights on systems engineering in his weekly blog se-trends.de.

Michael spent ten years in the USA, where he acquired a Master’s degree at M.I.T. and worked for various start-ups, both in the San Francisco Bay Area as well as the Boston Area.
He holds a Ph.D. in computer science from the University of Düsseldorf and a Dipl.-Ing. from the University of Hamburg.

His latest endeavor is the development of a virtual quality assistant called Semiant, which combines MBSE with AI. He acts as Head of Customer Success in the joint venture.

Dr. Yang Li is a Field Application Engineer and also a Consultant at pure-systems.
He shares his knowledge about product line engineering and pure::variants in tutorials, trainings and workshops to help customers on their journey towards a systematic variant management approach.

Yang received his Ph.D. degree in computer science from the University of Magdeburg.
His research focuses on adopting artificial intelligence techniques to improve work efficiency in the area of product line engineering.

Categories: AI MBSE

Benefit from MBSE, without MBSE

MBSE – Model Based Systems Engineering – is experiencing a renaissance. Organizations are looking at MBSE to address the challenges of rising complexity and more stringent regulatory requirements. But proper MBSE takes a large investment over a long period of time. And unfortunately, many initiatives wither away due to a lack of full commitment from leadership, acceptance issues, and underfunding.

To be successful with MBSE, the trick is to start with specific, value-driven use cases. And here is the good news: it is possible to implement such value-driven use cases without confronting the team with having to learn MBSE.

Categories: AI Systems Engineering

AI’s Role in Accelerating Product Development (Free Cutter Report)

We have been developing products more or less successfully for centuries. There have been several disruptive innovations during that time, like the production line, the rigorous approach of systems engineering, the introduction of electronics, and now software.

Categories: AI MBSE

Machine Translation for better Product Development

Dealing with foreign languages has gotten so much easier recently. Even free translation tools produce superb results. Machine translation is a great example of the digital transformation that we are experiencing.

But why should we stop at translating text into human languages? Languages for describing systems, like SysML, have been around for decades. The method of applying such languages is called MBSE (Model Based Systems Engineering). While MBSE can have benefits when developing complex products, it requires significant investment for tools, training, consulting and configuration.

Model Based Systems Engineering (MBSE) Without the Overhead

Semiant provides you with the benefits of MBSE, but without the investment. Semiant “translates” specifications into a machine-readable system model. From the model, it extracts useful information, makes suggestions or even performs mundane but important activities. This prevents waste, reduces risk and speeds up development.

Semiant automatically “translates” the specification into a machine-readable system model. It uses this model to help you make the specification even better and to derive as much value from the system model as possible.

The idea of automatic translation into machine-readable languages is not new. For instance, a voice assistant like Amazon’s Alexa first translates the user’s question or command into a query that the system then executes. The user never sees the (automatically translated) query, only the outcome. This is how Semiant works, except that it translates the text into a systems modeling language.

Alexa is a general purpose virtual assistant while Semiant is a product development specific virtual assistant. The following figure shows the similarities between Amazon’s Alexa and Semiant:

Tangible Benefits

This approach allows Semiant to immediately provide the benefits of MBSE to the practitioners without having to learn a new modeling language or tool. It provides value that is convincing to decision makers as well. Specifically, Semiant:

  • Prevents waste: For instance, creating a glossary (or controlled vocabulary) makes sense to support compliance activities. But a glossary created by a human will almost certainly be incomplete and outdated on day one. Semiant will create it automatically, and it will always be up to date.
  • Reduces risk: Understanding open issues reduces the risk of slipping delivery dates, of organizational exposure due to undetected gaps in compliance, and so on. Semiant even makes suggestions on how to fix issues.
  • Speeds up product development: By automating mundane tasks, like the creation of the initial traceability, you get more done faster with your existing team. Onboarding efficiency also increases, as new team members get answers to many early questions from Semiant rather than taking up experts’ time.

More Skills on the Way

Today, Semiant can extract glossaries from specifications. The system model is the foundation for this capability. But just as MBSE has many benefits, we are only scratching the surface: we will add more Semiant “Skills” over time to unlock the full potential of MBSE, without anybody having to learn a modeling language.
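To give a feel for what glossary extraction involves, here is a deliberately crude sketch that collects recurring multi-word terms (frequent bigrams) from specification sentences. The actual Semiant pipeline is not public and certainly more sophisticated (it works on a system model, not raw bigrams); the sentences below are invented.

```python
from collections import Counter

def glossary_candidates(sentences, min_count=2):
    """Collect word pairs that recur across the specification."""
    bigrams = Counter()
    for s in sentences:
        words = s.lower().split()
        bigrams.update(zip(words, words[1:]))
    return sorted(" ".join(b) for b, c in bigrams.items() if c >= min_count)

spec = [
    "the control unit shall report the fill level",
    "the fill level shall be shown on the display",
    "the control unit shall stop the pump",
]
print(glossary_candidates(spec))
```

Recurring terms like “control unit” and “fill level” surface as glossary candidates; a real tool would additionally filter stopword pairs and attach definitions from context.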

Photo by Ivan Bandura on Unsplash

Categories: AI

Is Artificial Intelligence Good or Bad?

Call them robots, Skynet or Artificial Intelligence (AI): Literature is full of machines that have a life of their own. The big question is: Will these machines create heaven on earth? Will they strive to destroy us? Or will they just be indifferent?

The scientist Ray Kurzweil introduced the concept of the “Singularity”: the point in time when machines will be as intelligent as humans. At that point, these machines will build even better machines. In other words, machine intelligence will surpass human intelligence. This is the runaway point: we have no idea what will happen thereafter.

“Our sole responsibility is to produce something smarter than we are; any problems beyond that are not ours to solve …”

Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology

However, we are still years, if not decades away from this point. At least for now, this is good news: While Artificial Intelligence is far from showing human intelligence, it is smart enough to take over more and more mundane tasks.

One such skill is the ability to process human language. Not “understand”, mind you: a machine today can analyze the structure of a sentence, extract concepts, and put them in relationship to others. It can even find unusual outliers and point them out to a human. But it cannot truly understand.

But this still helps: We humans can do the creative work, while the AI tells us where to look.

This is where Semiant, our virtual quality assistant helps experts in product development. Semiant performs the mundane tasks, so that the humans can do the interesting work.

Photo by Keitravis Squire on Unsplash