Categories
Requirements Semiant Skill

Check Requirements Quality Everywhere With One Click

As AI proliferates, more and more tools for applying it appear on the market. This includes tools and add-ons for requirements quality checking.

Semiant is unique: because it is browser-based, it works in any web application or website. This includes requirements management tools like Jama Connect, Polarion, Codebeamer or DOORS Next.

We are proud to announce that we just implemented a requirements quality check that we make available for free. It is based on Vale, an open source quality checker.

How it works

The new quality check is easy to use: just open any web page / web app, click the Semiant icon, and the requirements to be checked are highlighted in gray. Once the check is complete, the text is highlighted in green (no problem) or red, with error marks. The following 13-second video shows how it works in Confluence:

Try it Yourself!

If you want to try Semiant with Vale, you can install Semiant from the Chrome Web Store. To use it, you must create an account.

After logging in, we recommend making the icon permanently visible and configuring Semiant (Semiant icon > Options). In particular, set Vale as the default skill (and optionally disable the other services):

Recommended Semiant settings to launch Vale as the default skill

Enabled Quality Checking Rules

In the free version of Semiant, the following rules are enabled (as of August 2022); this may change over time. These are standard rules from the Vale project: they were not designed specifically for requirements, and they target the English language. Some are simple, others are quite complex:

  • Cliches: Clichéd phrases are flagged, such as “a far cry” or “in a nutshell”.
  • Illusions: Word repetition, e.g. “the the”.
  • Passive: Passive sentences.
  • So: A sentence begins with “so”.
  • ThereIs: Sentences should not begin with “There is”, “There are”, etc.
  • Adverbs: A list of adverbs that are not recommended, such as “quickly” or “rarely”.
  • Contractions: Contractions should be used, e.g. “aren’t” instead of “are not”.
  • DateOrder: To avoid confusion, months must be written out in letters.
  • Ellipses: Three dots (“…”) are flagged.
  • Ordinal: Ordinals such as “Secondly” should not have an “-ly” (use “Second”).
  • OxfordComma: In enumerations, use a comma before “and”.
  • Semicolon: A semicolon indicates a complex sentence and should not be used.
  • SentenceLength: Sentences longer than 30 words are flagged.
  • Suspended: Suspended hyphenation, i.e. abbreviating compounds in an enumeration with a hyphen.
  • Wordiness: Suggests simplifications, e.g. “always” instead of “at all times”.
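
To give a feel for how such checks behave, here is a minimal Python sketch that approximates two of them (SentenceLength and ThereIs) with plain regular expressions. It is an illustration only, not Vale's actual implementation; the sentence splitting and the 30-word threshold are simplifying assumptions.

```python
import re

SENTENCE_SPLIT = re.compile(r"(?<=[.!?])\s+")
THERE_IS = re.compile(r"^\s*there\s+(is|are|was|were)\b", re.IGNORECASE)
MAX_WORDS = 30  # same threshold as the SentenceLength rule above

def check(text):
    """Return (rule, sentence) findings; a rough stand-in, not Vale itself."""
    findings = []
    for sentence in SENTENCE_SPLIT.split(text.strip()):
        if len(sentence.split()) > MAX_WORDS:
            findings.append(("SentenceLength", sentence))
        if THERE_IS.match(sentence):
            findings.append(("ThereIs", sentence))
    return findings

print(check("There are three operating modes. The system shall start within two seconds."))
# -> [('ThereIs', 'There are three operating modes.')]
```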

There are many more rules in the Vale library. However, these are the ones that we think are useful for requirement checking. All rules can be found in the Vale repository.

Custom rules for requirement checking

The notation for Vale rules is quite simple. In particular, it is no black magic to create rules for other languages.

The simplest rules are word lists, for example weak words. In principle, it is also possible to check sentence templates. However, this is where Vale reaches its limits: the task is to find errors in the application of a template, which means the rule logic has to be written so that deviations from the template are detected.

An example: a well-established template is the format for user stories, “As <ACTOR> I want <FUNCTION> so that <BENEFIT>”. A rule could match the first part and flag a missing benefit as an error. It already becomes clear that this can get tedious.
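
To illustrate the logic such a rule has to encode, here is a minimal sketch in Python (not Vale's YAML syntax): it matches the first part of the user story template and flags a missing benefit clause. The regular expressions and the example story are assumptions for illustration.

```python
import re

STORY_START = re.compile(r"^As\s+.+?\s+I\s+want\s+.+", re.IGNORECASE)
BENEFIT = re.compile(r"\bso\s+that\b", re.IGNORECASE)

def check_user_story(text):
    """Flag user stories that start correctly but omit the benefit clause."""
    if STORY_START.match(text) and not BENEFIT.search(text):
        return "Missing benefit: add 'so that <BENEFIT>'"
    return None

print(check_user_story("As a driver I want to unlock the car with my phone"))
# -> Missing benefit: add 'so that <BENEFIT>'
```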

For a really well-functioning system, several approaches usually need to be combined, so Vale alone is not necessarily enough.

Categories
Semiant

Semiant Webinar: Accelerate Product Development With AI

Is your product team stretched thin? Is everybody demanding time with your experts, keeping them from getting work done? Then add Semiant to your team to relieve your experts and to align your team.

Get a live demonstration on how Semiant works in any web-based application like DOORS Next, Jama or Jira. Learn what we have planned for the future. Discover how Semiant can save you time and reduce risk in product development.

Not sure what Semiant is all about?

Would you like to learn more? Then let’s have a 15-minute conversation:

Categories
AI Partnerships

8 Ways How AI Will Transform Product Line Engineering (PLE)

by Dr. Michael Jastram and Dr. Yang Li

Today’s customers demand products that are perfectly tailored to their needs. At the same time, product complexity is rising. Both trends cause the number of product variants to increase. Systematically co-developing multiple products in a single product line is the focus of Product Line Engineering. However, many product companies started without proper PLE and realize sooner or later that they need it in order to master the increasing number of variants. Other companies already operate product lines successfully and would like to optimize their continuous evolution. All of them can benefit from AI.

Below, we present 8 ways in which AI could help companies master these challenges, grouped by the phase of the product line lifecycle in which they are applicable.

Getting Started in PLE with AI

Building up a product line with feature-based PLE needs upfront investment, for example defining the scope of the product line, creating initial feature models that hold the variability information, defining the variants, etc. In the following, we describe some potential AI-powered use cases that can help mitigate the pain and reduce the costs when starting with PLE.

Please note: in this article, we talk about possible applications of AI techniques in the context of feature-based PLE. If you would like to know more about feature-based PLE, it is described in ISO/IEC 26580:2021.

1. Extract Features from Legacy Artifacts

Let’s be honest: How many projects start with variant management in mind? Very few. Therefore, many organizations have piles of documentation for existing products, which provide valuable input for starting a product line in the domain of their existing product(s). However, these legacy artifacts are often unstructured and hard to reuse.

There are many ways to use Natural Language Processing (NLP) techniques to analyze these legacy artifacts, such as requirements specifications and product descriptions, and to extract features, for instance functional features. For example, extracting the key words/phrases from the requirements specifications could be the initial step towards a candidate feature list. Various techniques can be used to automate key word/phrase extraction, including TF-IDF (a frequency-based method), TextRank (a graph-based method), neural word embeddings (e.g. BERT), semantic role labeling, or named entity recognition. The key words/phrases with high significance identified by these techniques (or a combination of them) may represent functionalities that can be expressed as features. With the development of neural networks and deep learning, some of these techniques have achieved higher accuracy on general NLP datasets, which benefits their adoption in the area of PLE. However, further analysis is indispensable to refine the feature list and to identify reliable traceability between the extracted features and the legacy artifacts. This traceability enriches the domain knowledge of features with whole documents rather than just key words/phrases with limited information, which is very helpful for both manual and automated analysis.
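
As a concrete illustration of the frequency-based option, the following Python sketch uses scikit-learn's TfidfVectorizer to surface candidate feature terms from a handful of requirement sentences. The example texts, the library choice and the cut-off of five terms are assumptions; a real pipeline would add preprocessing and the refinement step described above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical legacy requirements acting as the corpus
requirements = [
    "The battery management system shall monitor cell temperature.",
    "The charging controller shall limit the charging current.",
    "The battery management system shall report the state of charge.",
]

# Uni- and bigrams, English stop words removed
vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
tfidf = vectorizer.fit_transform(requirements)

# Rank terms by their summed TF-IDF weight across the corpus
scores = tfidf.sum(axis=0).A1
terms = vectorizer.get_feature_names_out()
candidates = sorted(zip(terms, scores), key=lambda t: t[1], reverse=True)[:5]
print(candidates)  # top-ranked candidate feature terms with their scores
```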

It’s one thing to have a feature, another to have its description. Once a feature has been identified, AI can help to describe it. For this purpose, it could compare similar feature descriptions in different documents and help merge them into a unified description. Or it could apply automatic text summarization techniques to improve the management of all this content.

2. Extract Variability Information from Legacy Artifacts

Variability information is vital to PLE. At the same time, it is challenging to identify such information if product variants evolved over years without systematic variant management. AI can mitigate the pain of such reverse engineering by analyzing the legacy artifacts to identify exact or potential variability information, such as whether a feature is optional or mandatory, whether a feature works in combination with other features or conflicts with them, and how to structure such information in a way that users can easily understand.

In order to extract the relationships among features, the legacy artifacts related to those features need to be analyzed further. For example, AI can help group related features in terms of functionality, semantic information or potential associations mined from the corresponding legacy artifacts. Various techniques, such as language modeling, similarity calculation, clustering and association rule mining, can be used to achieve this goal. Achieving a good structure of the features (i.e. a feature model) is a real milestone in the feature modeling phase; it depends on how similar the features are from different perspectives, how strong their relationships are and what goals the feature model should achieve.
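
A minimal sketch of the grouping step, assuming feature names or short descriptions are already available as text: it embeds them with TF-IDF and clusters them with KMeans from scikit-learn. The example features and the number of clusters are assumptions, and real feature texts would need richer representations.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical extracted feature descriptions
features = [
    "heated seats front",
    "heated seats rear",
    "adaptive cruise control",
    "lane keeping assist",
    "seat heating control",
    "traffic sign recognition",
]

vectors = TfidfVectorizer().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Inspect which features ended up in the same group (candidate sub-trees of a feature model)
for cluster in set(labels):
    print(cluster, [f for f, l in zip(features, labels) if l == cluster])
```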

3. Extract Feature Configurations from Legacy Artifacts

Feature configurations for product variants are indispensable. Extracting feature configurations from legacy artifacts can also greatly reduce the cost and effort of migrating to the product line. The legacy artifacts of similar products can be seen as variant-specific assets, but they were not developed with a feature-model-based method. AI can help recover the missing links between artifacts and features, which can then be used to formulate feature configurations for product variants.

Existing feature models are a prerequisite for success here: if you do not know anything about your features in advance, it is impossible to figure out the feature configurations from the legacy artifacts. Feature models maintain the variability information of a family of products in a structured way. To automate the extraction of configurations, you first need to help computers understand the meaning of the existing features. That is to say, you have to digitalize the domain knowledge in a format and structure that computers can easily process, analyze and understand. A combination of feature models with knowledge graphs or semantic networks holding the domain knowledge could achieve this. Furthermore, the initially extracted feature configurations can be analyzed with the help of similarity calculation and clustering algorithms to optimize the number of variants and the corresponding feature configurations.
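
One way to sketch the matching step in Python, assuming a feature model already exists as a list of feature descriptions: compute the cosine similarity between a variant's legacy artifact and each feature, and mark a feature as selected when the similarity exceeds a threshold. The texts, the TF-IDF representation and the threshold of 0.3 are assumptions for illustration only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical feature descriptions from an existing feature model
features = {
    "ParkAssist": "automatic parallel parking with ultrasonic sensors",
    "HeatedSeats": "electric seat heating for driver and passenger",
}

# Hypothetical legacy artifact of one product variant
variant_artifact = "The vehicle shall support automatic parallel parking using ultrasonic sensors."

vectorizer = TfidfVectorizer().fit(list(features.values()) + [variant_artifact])
feature_vecs = vectorizer.transform(features.values())
artifact_vec = vectorizer.transform([variant_artifact])

similarity = cosine_similarity(artifact_vec, feature_vecs)[0]
THRESHOLD = 0.3  # assumed; would be tuned per project
configuration = {name: sim > THRESHOLD for name, sim in zip(features, similarity)}
print(configuration)  # expected: {'ParkAssist': True, 'HeatedSeats': False}
```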

AI Reads Your Customer/Stakeholder’s Minds

It is essential for any business that its products bring value to its customers. But in today’s fast-paced environment, it is more challenging than ever to get this right. The success of one product does not guarantee that its successor will be successful as well.

AI can help to consume customer input as early as possible and to align it with the overall product line strategy.

4. Map User Needs onto Feature Model

Especially in the B2B environment, organizations may end up building a unique product for each customer, based on their requirements. If you are following PLE best practices, then you already have a solid feature model for your product line. New capabilities will be implemented in the context of the existing product line architecture. AI can help you to map user needs onto your feature model for optimal alignment. This can ease the effort of configuring variants.

In practice, this requires a lot of experience. Consider, for example, a filling station, where each customer has differently shaped containers, different requirements for handling the content (liquid? pellets?), and so forth. Designing such a system usually crosses engineering disciplines, and many stakeholders are involved in developing the superset engineering assets for the product line: superset requirements, superset test cases, superset source code, superset architecture, etc.

Hence, using AI to automatically or interactively map the existing features to the corresponding engineering assets across disciplines can substantially reduce the cost and effort of developing a tailored product.

The techniques here are similar to those for extracting features, variability and configurations from legacy assets, described in the section Getting Started in PLE with AI. However, they serve a different use case: analyzing the relevance or similarity between user needs or assets and existing features.

5. Leverage Customer Input

But where do those user needs come from? AI is already applied in processing customer input for use cases that are unrelated to PLE. This includes: analyzing support tickets or customer support conversations, harvesting user forums for information, processing telemetry from the product, or observing sales behavior.

Today, out-of-the-box AI systems extract valuable information from these data sources; automatic sentiment analysis or facial emotion analysis, for instance, can reveal which features receive positive feedback from customers and which attract more complaints. Considering data privacy, however, sensitive data such as videos or images containing customers’ personal data must be handled properly. Using this information to improve your product lines is then just a small step.
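
A minimal sketch of this idea, assuming the Hugging Face transformers library and a handful of feedback snippets that are already mapped to features. The model download, the example texts and the feature mapping are all assumptions; a production setup would also handle languages, sarcasm and data privacy.

```python
from collections import defaultdict
from transformers import pipeline

# Hypothetical customer feedback already mapped to features
feedback = [
    ("ParkAssist", "The automatic parking works flawlessly, love it."),
    ("ParkAssist", "Parking assist keeps aborting halfway through."),
    ("HeatedSeats", "Seat heating warms up way too slowly."),
]

sentiment = pipeline("sentiment-analysis")  # downloads a default English model

labels_per_feature = defaultdict(list)
for feature, text in feedback:
    result = sentiment(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    labels_per_feature[feature].append(result["label"])

for feature, labels in labels_per_feature.items():
    complaints = labels.count("NEGATIVE")
    print(f"{feature}: {complaints}/{len(labels)} negative mentions")
```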

6. Identify Non-Existing Functionality

An interesting variant of the above is the identification of desired but non-existent features. Sometimes customers articulate this themselves, for example in the form of feature suggestions. Sales might also have insights by analyzing lost sales. AI-powered data analysis can help you automatically clean your data, extract the information you are interested in and obtain predictions with pre-trained models. This way, the missing features can be mined with a data-driven methodology rather than directly from human thoughts, for instance by clustering similar ideas via topic modeling techniques. Using the approaches mentioned above, AI maps these new features onto the existing feature model.
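
As a sketch of the clustering-by-topic idea, the following snippet runs scikit-learn's LatentDirichletAllocation over a few hypothetical feature suggestions and prints the top words per topic. The suggestions, the choice of two topics and the vectorizer settings are assumptions; with so little text the topics are only indicative.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical customer feature suggestions
suggestions = [
    "Please add remote start from the mobile app",
    "I want to preheat the car from my phone",
    "Trailer assist would help when reversing with a trailer",
    "Add a reversing camera with trailer guidelines",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(suggestions)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

terms = vectorizer.get_feature_names_out()
for topic_id, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"Topic {topic_id}: {top}")  # each topic is a candidate missing feature
```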

Understand and Optimize the PLE Value Chain

The evolution of the product lines is another point that needs to be taken into consideration. As product lines grow and the number of product variants increases, how the product line structure, variability information and variants are optimized determines whether the product lines can evolve in a reasonable, controlled and predictable way.

AI can play a role in the evolution of product lines by learning product-line-related knowledge in the PLE process, thus helping to make rational decisions.

7. Help Decision Makers and other Stakeholders with Analysis

As the product lines evolve over time, a lot of data accumulates, such as variants, configurations, features, and assets. AI techniques can help analyze this data related to the product lines and provide different perspectives for decision makers.

It would be ideal if there were no variability per customer, but that is never the case in reality. Variability grows to meet the needs of different stakeholders, leading to an exponential increase in the number of possible variants. This explosion of variability results in a large increase in workload across the different PLE activities. But perhaps not all of that variability is necessary for the product lines, and it should be optimized. With the help of AI, the data you have can be used to analyze your product lines.

For example, the structured domain knowledge in digital form (e.g. a knowledge graph), extracted and formulated from that data, can serve as a central knowledge base that helps automate the analysis process. Moreover, predictive analytics based on machine learning and deep learning techniques, such as decision trees, linear/logistic regression and neural networks, can be used to train prediction models on historical data with known outcomes. These models can then predict outcomes for new data, which speeds up the analysis and optimization of the product variants in an efficient and reliable way.
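
A minimal sketch of such a prediction model, assuming historical feature configurations (one row per variant, one column per feature) labeled with a known outcome. The data, the feature names and the outcome label are invented for illustration; a real model would use far more data and validation.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical historical variants: columns = [ParkAssist, HeatedSeats, TrailerAssist]
configurations = [
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 1],
]
# Known outcome per variant, e.g. 1 = field problems reported, 0 = none
outcomes = [0, 0, 1, 1]

model = DecisionTreeClassifier(random_state=0).fit(configurations, outcomes)

# Predict the outcome for a newly planned configuration
new_variant = [[1, 0, 1]]
print(model.predict(new_variant))  # [1]: field problems predicted for this configuration
```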

In particular, combining in-field usage data of the products with variability information such as feature models and configurations via machine learning can provide new insights into the quality of the current product line, for instance by detecting patterns between feature usage and certain problems of products in the field. Such relationships are not easily detectable in the vast amount of information with traditional data science techniques.

8. End-to-End PLE Intelligence

As mentioned above, AI can help you find variability through reverse engineering when starting a product line, derive configurations of product variants, and identify non-existing features. AI can also analyze the data related to the product lines, which can provide feedback to product portfolio managers or other stakeholders to further improve the product lines. Multiple separate AIs may exist to tackle the different PLE activities – which is fine. But it would be better to have an AI that spans these activities and learns their interactions.

Why is End-to-End PLE Intelligence important? Because, although each activity may be an independent task, the quality of its result affects the subsequent activities and thus the entire PLE. Generic pre-trained models are probably not sufficient for End-to-End PLE Intelligence, since real-time intelligence is vital: the real-time changes in the PLE activities and their impacts have to be learned so that the End-to-End PLE Intelligence can evolve. Hence, designing and implementing End-to-End PLE Intelligence is a very complex task that requires the combination of big data, sophisticated deep learning algorithms and good software engineering.

What’s Next?

Imagine a world where you could feed your system with unstructured data, and you get an optimized feature model out, with descriptions, parametrization and all. This would be the “holy grail” of AI in product line engineering.

While all this may not seem revolutionary right now, there is huge long-term potential in deploying these technologies now. It may make the difference between scaling up and delighting customers – or biting the dust.


About the authors

Dr. Michael Jastram is an entrepreneur with focus on product development technologies. He has a solid technical foundation in software development, having published his first software in 1988.
He understands the connection between customer problem and technical solution, having worked on all levels in between, including software architecture, systems engineering, business consulting and solution architecture.

Michael has published four books, several articles and regularly talks at conferences. He publishes insights on systems engineering in his weekly blog se-trends.de.

Michael spent ten years in the USA, where he acquired a Master’s degree at M.I.T. and worked for various start-ups, both in the San Francisco Bay Area as well as the Boston Area.
He holds a Ph.D. in computer science from the University of Düsseldorf and a Dipl.-Ing. from the University of Hamburg.

His latest endeavor is the development of a virtual quality assistant called Semiant, a solution that combines MBSE with AI; he acts as Head of Customer Success in the joint venture.

Dr. Yang Li is a Field Application Engineer and also a Consultant at pure-systems.
He shares his knowledge about product line engineering and pure::variants in tutorials, trainings and workshops to help customers on their journey towards a systematic variant management approach.

Yang received his Ph.D. degree in computer science from the University of Magdeburg.
His research has focused on adopting artificial intelligence techniques to improve work efficiency in the area of product line engineering.

Categories
Requirements Semiant Skill

Check Requirements Quality with a Single Click

A key use case in product development is the authoring and reviewing of requirements. The quality of the requirements matters: incorrect or incomplete requirements can generate issues down the line that are expensive, create delays and even lead to product recalls.

For this reason, some requirements management tools include AI-based requirements analysis. But there are two problems with this: First, those checkers only work with exactly one requirements tool – the one they are built into. And second, they are not customizable, which means that they cannot take organizational authoring rules into account.

Semiant now provides a requirements quality assistant that addresses both issues.

We partnered up with Qualicen, a leader in AI-based requirements analysis with natural language processing (NLP). We integrated Holmes, their requirements analysis engine, as a new skill into Semiant. This means that you can perform requirements quality analysis with a single click anywhere: In your favorite requirements management tool or when analyzing your competitor’s datasheet.

Would you like to see the Holmes quality check in action? Then install Semiant in Chrome or Edge, log in and activate Quality Check in the Options.

Using the quality check is quite intuitive. You can see Holmes in the following screenshot (assuming that Holmes has been enabled):

Holmes Requirements Quality Check is Now a Semiant Skill

Upon activating Semiant via the browser extension icon, you will see the familiar sidepane. But now it features tabs: You can see in the screenshot that the glossary manager is still available on a different tab.

The sidepane shows statistics on the requirements quality of the active specification. In the screenshot, you see a specification in Jama Connect.

You can dive into specific problems, as Semiant highlights issues directly in the text. The screenshot shows the word “many”, identified as a weak word. Hovering over the word will produce a tooltip with additional information on the problem and guidance on how to fix it.

Speed up Reviews, Unburden Your Team and Improve Requirements Quality

Semiant and Holmes together are easy to use and provide immediate value, so your team will start using them right away. This leads to higher quality requirements, which in turn leads to faster and better reviews – and less rework.

Even better, Holmes works everywhere (as long as it’s in a web browser). You are therefore not limited to a quality check in Jama: you can check quality in Confluence, Polarion, GitHub or on a wiki.

Holmes is highly customizable. Therefore, we can tailor it to your organization’s needs.

What’s that smell?

Like software code, requirements can become “smelly” over time: they get outdated, inconsistent, badly structured, etc. Holmes consists of modular smell checkers. As part of the tailoring, we would enable and configure those smell checkers that matter to you and your organization.
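
To make the idea of modular smell checkers concrete, here is a small Python sketch of how such checkers could be composed. The checker names, the weak-word list and the passive-voice heuristic are illustrative assumptions, not Qualicen's actual implementation.

```python
import re

class WeakWordChecker:
    """Flags vague quantifiers such as 'many' or 'some'."""
    WEAK_WORDS = re.compile(r"\b(many|some|several|appropriate)\b", re.IGNORECASE)

    def check(self, requirement):
        return [f"weak word: '{m.group(0)}'" for m in self.WEAK_WORDS.finditer(requirement)]

class PassiveVoiceChecker:
    """Very rough passive-voice heuristic, for illustration only."""
    PASSIVE = re.compile(r"\b(is|are|was|were)\s+\w+ed\b", re.IGNORECASE)

    def check(self, requirement):
        return ["possible passive voice"] if self.PASSIVE.search(requirement) else []

# Only the checkers relevant to the organization are enabled
enabled_checkers = [WeakWordChecker(), PassiveVoiceChecker()]

requirement = "Many errors are logged by the system."
for checker in enabled_checkers:
    for finding in checker.check(requirement):
        print(type(checker).__name__, "->", finding)
```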

List of Holmes smell checkers for analyzing requirements quality

In addition, we can customize Holmes for your organization’s authoring rules. Consistency of texts makes it easier to read them and to spot problems early, thereby saving time and reducing the need for rework.

Can Holmes help your team?

If you think that Semiant, together with the Holmes requirements analysis, could add value to your team, then let’s have a conversation.

Image Source: Qualicen

Categories
Partnerships Requirements

Semiant partnering with Qualicen

The quality of product development is of strategic importance for many companies because problems can lead to delays, costly recalls or even personal injury. Increasing complexity and stricter compliance requirements are exacerbating the situation. Traditional approaches are reaching their limits and jeopardizing economic success. What to do?

The use of AI solutions in product development is already showing promising results. Some readers will certainly have followed my activities in the field of AI: I have been working on a virtual quality assistant under the name Semiant for about a year, and I have been following the activities of Qualicen for a while. Now we have decided to continue on this path together.

With Semiant and Michael Jastram, we are gaining a strong partner who understands our customers and will play a key role in shaping the vision of Holmes in order to create the greatest possible benefit for our users.

Dr. Sebastian Eder, General Manager Qualicen GmbH

Under the name Holmes, Qualicen is working on a platform that uses artificial intelligence to relieve teams in product development. Holmes also uses Natural Language Processing (NLP) to process human-written content. What both solutions have in common is that they can be used to automate many important but monotonous and error-prone tasks for which employees are often overqualified. For product development, this means more efficient work, fewer risks and a lot of time saved.

Customer Benefit

For Semiant customers and those of Qualicen, not much will change at first. Together, however, we can act faster with the enlarged team. In the medium term, we want to develop a product with Holmes that can be used immediately without much adjustment and delivers measurable results. Which use cases we’ll tackle first isn’t clear yet, but we have a lot of ideas.

Image Source: Unsplash

Categories
AI MBSE

Benefit from MBSE, without MBSE

MBSE – Model-Based Systems Engineering – is experiencing a renaissance. Organizations are looking at MBSE to address the challenges of rising complexity and more stringent regulatory requirements. But proper MBSE takes a large investment over a long period of time. And unfortunately, many initiatives wither away due to a lack of full commitment from leadership, acceptance issues and underfunding.

To be successful with MBSE, the trick is to start with specific value-driven use cases. And here is the good news: it is possible to implement such value-driven use cases without confronting the team with having to learn MBSE.

Categories
Glossary Semiant

Work more efficiently with Semiant glossary browser extension

We are torn about glossaries: on the one hand, everyone on the team is grateful when there’s a well-maintained glossary. On the other hand, no one wants to create and maintain it. The coordination process can also lead to disgruntlement and eat up a lot of time. And even if there is a good glossary: there’s a big risk that many stakeholders won’t hear about it and it will gather dust in a drawer.

We just released several glossary-related features of Semiant. Hopefully, we can help the glossary achieve a renaissance. We invite all readers to try out the latest version of Semiant.

Usefulness of a glossary

For a glossary to be useful, it must be accurate and used. When that’s the case, a glossary has a lot of benefits:

  • Fewer misunderstandings save time, avoid mistakes, and reduce friction
  • New employees can familiarize themselves more quickly and take up less of the experts’ time
  • It is less error-prone to deal with product lines and variants
  • Maintenance over the product lifecycle is simplified

Standard glossary or controlled vocabulary?

Why reinvent the wheel? After all, there are ready-made glossaries on many specialized topics, from “automotive” to “zoology”. And indeed, if there is a suitable glossary, we should use it. For example, INCOSE’s Systems Engineering Handbook includes a glossary (Appendix C: Terms and Definitions).

Standard glossaries are useful in education. But in projects, a standard glossary rarely covers all relevant terms. In addition, terms that are familiar to certain stakeholders may have different meanings. Therefore, a controlled vocabulary is an important category of glossary that can complement or replace a standard glossary. Ideally, then, we have both a standard glossary and a specific glossary, which can be project-, department- or organization-specific.

Merge glossaries intelligently with Semiant with the browser extension

Semiant is a virtual quality assistant that performs mundane tasks in product development. To do this, Semiant hooks the glossary into any web-based application via a web browser extension (currently only for Chrome and Edge).

The current version of Semiant is minimal, but already supports multiple sources for the glossary and allows merging in the web browser. How this works can be seen in this video:

Development partner wanted

As you can see in the video, Semiant is currently a minimal demonstrator. However, it shows that the technology works in principle. We have many questions and ideas on where to go from here. We are looking for a development partner who recognizes the potential and wants to help determine the further direction of Semiant. Specific ideas and questions are:

  • Which data sources should we support? Currently Semiant reads existing glossaries from RDF/Turtle data sources (see the loading sketch after this list).
  • How useful is glossary term extraction via Natural Language Processing (NLP)?
  • What team and categorization capabilities should we implement? Also, how expressive should the data structures be?
  • How important is the use case of gathering contextually relevant information from applicable documents and standards?
  • How can we improve the usability of the glossary with the browser extension?
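
As a minimal sketch of what reading such an RDF/Turtle source can look like (mentioned in the first bullet above), the snippet below uses the rdflib library and assumes the glossary is modeled with SKOS labels and definitions. The file name and the SKOS modeling are assumptions, not necessarily how Semiant stores its data.

```python
from rdflib import Graph
from rdflib.namespace import SKOS

# Hypothetical glossary file with SKOS concepts (prefLabel + definition)
g = Graph()
g.parse("glossary.ttl", format="turtle")

glossary = {}
for concept, label in g.subject_objects(SKOS.prefLabel):
    definition = g.value(concept, SKOS.definition)
    glossary[str(label)] = str(definition) if definition else ""

for term, definition in sorted(glossary.items()):
    print(f"{term}: {definition}")
```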

Have we aroused your interest?

Then let’s have a conversation >>

Categories
Semiant Traceability

What tools shall Semiant integrate with?

Like any team member, a virtual quality assistant has to access data. Semiant will access the same tools that the other team members use. The question is: What tools are relevant, and which are the most important tools? Which tool shall we integrate with first?

Please help us to answer this question in this poll. And please read on to find out what Semiant would use the integration for.

Integrate with Data Sources

The primary use case for integration is to access data sources. Semiant will analyze the integrated data source to extract glossary terms. This will also make it possible to establish traceability between the glossary terms and the “items” in that specification.

What exactly an item is depends on the integrated tool. In a classical requirements management tool (Polarion, Jama, etc.), this would be an individual requirement. In a document system or Wiki (Confluence, Google Docs), this could be a sentence, a paragraph or section.
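
A minimal sketch of what such term extraction with traceability could look like, assuming spaCy with its small English model is installed. The item texts are invented, and noun-chunk extraction is only one of several possible approaches, not necessarily the one Semiant uses.

```python
from collections import defaultdict
import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Hypothetical items (e.g. individual requirements) with their IDs
items = {
    "REQ-1": "The charging controller shall limit the charging current.",
    "REQ-2": "The battery management system shall monitor the charging current.",
}

# Map candidate glossary terms to the items they occur in (traceability)
term_to_items = defaultdict(set)
for item_id, text in items.items():
    for chunk in nlp(text).noun_chunks:
        term_to_items[chunk.text.lower()].add(item_id)

for term, ids in sorted(term_to_items.items()):
    print(f"{term}: {sorted(ids)}")
```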

Integration for Other Use Cases

Eventually, we will need integrations for other use cases. For instance, Semiant might integrate with a task management system to communicate with users. This could be the built-in collaboration that many tools provide. But it could also be an external collaboration platform, like Slack.

Any other thoughts?

If you have an important use case or integration need that is not covered here, then please reach out by using the contact button at the bottom of this page. Also, check out our 2021 roadmap.

Image Source: FreePic

Categories
Semiant

2021 Semiant Roadmap

We officially launched Semiant in July 2021. Semiant uses Natural Language Processing (NLP) to identify key concepts found in your product specifications. From this, it produces an interactive glossary with traceability into your specification. This forms the basis for a solution that aligns language in your team and enables collaboration across departmental boundaries.

Roadmap Until the End of 2021

By the end of this year, we will finish building a solution that supports engineers building complex products that must comply with regulatory requirements. Semiant will analyze thousands of pages of standards and related documents and summarize the relevant parts to give practitioners contextualized insights. This will accelerate your team’s work, improve quality and reduce the risk of missing an audit or receiving contractual penalties.

Summarize relevant passages from standards or applicable documents

Timeline

We launched our demonstrator in July and are actively looking for feedback. Please contact us to join the program.

July 2021

Demonstrator

The demonstrator is a “proof of concept” that allows you to evaluate our technology. You can upload a specification and inspect the resulting glossary.

August 2021

Web Browser Extension

The browser extension gives you access to your glossary on any web page: Jira, Zendesk, Jama, you name it.

Winter 2021/22

Integration

To be valuable in a production environment, we will integrate Semiant with Confluence. This will provide you with an always up-to-date glossary for your team.

Spring 2022

Related Information Aggregation

Imagine you have thousands of pages of related documents and standards. You provide a requirement or some text as a starting point, and Semiant will give you summaries of all relevant sections from those documents.

Roadmap Beyond

We have many ideas on how Semiant can assist practitioners in product development and enable organizational alignment. If you are interested in hearing what’s possible, or if you have specific needs that Semiant could address, let’s have a conversation.

We encourage our users to help us shape the future of Semiant. Please join the program so that we can take your feedback into account.

Categories
AI Systems Engineering

AI’s Role in Accelerating Product Development (Free Cutter Report)

We have been developing products more or less successfully for centuries. There have been several disruptive innovations during that time, like the production line, the rigorous approach of systems engineering, and the introduction of electronics and now software.