Implicit Knowledge: The Hidden Key to Effective Gen AI

Antoine Aymer
Sep 17, 2024

In the world of generative AI (Gen AI), the true challenge lies not just in providing clear and rigorous prompt instructions (explicit knowledge) but also in applying the reasoning and logic that seem obvious yet remain unstated within the tasks (implicit knowledge).

This is crucial for enhancing the performance and efficiency of large language models (LLMs), like those integrated in the Gen AI Amplifier for Software & Quality Engineering, a platform that accelerates and improves the effectiveness of quality engineering from the very start of an application's lifecycle – and at every step of the way. This article delves into the importance of both explicit and implicit knowledge, illustrating these concepts through practical examples from the Gen AI Amplifier platform.

Explicit vs. Implicit Knowledge: Definitions and Academic Insights 

  • Explicit Knowledge refers to information that is easily articulated, documented, and shared. This includes clear and rigorous prompt instructions, facts, procedures, and guidelines that can be directly expressed through language. 
  • Implicit Knowledge, on the other hand, encompasses the subtler, often subconscious understandings and skills acquired through experience. This type of knowledge is challenging to quantify or formalize and includes insights like problem-solving techniques, contextual nuances, and the application of reasoning and logic in specific situations. 

In academic terms, Polanyi (1966) famously stated, “We can know more than we can tell.” Polanyi’s paradox, named after the British-Hungarian philosopher Michael Polanyi, proposes that a significant portion of human knowledge, including our understanding of the world and our own abilities, remains beyond explicit articulation. This highlights the nature of implicit knowledge: much of what we understand and use in our daily tasks is not easily articulated.

The Importance of Implicit Knowledge in LLMs 

While LLMs excel in processing explicit knowledge due to their training on vast datasets, they often struggle with tasks that require implicit knowledge. This limitation is evident in scenarios where nuanced understanding and contextual awareness are critical. 

Example: Generating Effective Test Cases 

Consider an example involving a “book-ordering system.” Here, the parameters have multiple equivalence classes, and an additional parameter, “Ordering period,” has been added: 

  • Number of books: 1; 2-8; >8 
  • Sum: <100; 100-250; >250 
  • Membership card: None; Silver; Gold; Platinum 
  • Ordering period: Weekday; Weekend; Public holiday 

We ask ChatGPT 4o to “generate test cases” for this scenario, and it displays 108 test cases to cover all possible combinations. 
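The figure of 108 is simply the Cartesian product of the four value sets (3 × 3 × 4 × 3). A minimal Python sketch confirms the count (the parameter labels below are ours, taken from the list above):

```python
from itertools import product

# Equivalence classes for each parameter of the book-ordering example.
parameters = {
    "Number of books": ["1", "2-8", ">8"],
    "Sum": ["<100", "100-250", ">250"],
    "Membership card": ["None", "Silver", "Gold", "Platinum"],
    "Ordering period": ["Weekday", "Weekend", "Public holiday"],
}

# Exhaustive coverage is the Cartesian product of all value sets.
all_combinations = list(product(*parameters.values()))
print(len(all_combinations))  # 3 * 3 * 4 * 3 = 108
```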

Implicit knowledge applied 

  • It recognized the parameters and their values (and would have done the same with constraints, had any been specified). 
  • It adjusted responses based on the understanding of book-ordering systems and typical test scenarios. 
  • It used common patterns to structure the test cases logically. 
  • It filtered out irrelevant combinations to focus on meaningful test cases. 
  • It applied the default approach of generating the maximum number of test cases to cover all combinations. 

Implicit knowledge missed 

Implicitly, Gen AI should have: 

  • Recognized that this scenario is suitable for data combination testing. 
  • Asked about the business and risk value of the input first, determining whether the test level is low, medium, or high rather than assuming high risk by default. 
  • Identified pairwise testing as the appropriate technique for a medium test level. 
  • Provided a clear expected result for each generated test case to compare against the actual outcome. 
  • Recognized its own limitations and informed the tester that traditional code is more accurate and cost-effective for generating pairwise test cases; Gen AI should not be used for this purpose (see the sketch after this list). 
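To make that last point concrete, here is a minimal sketch, not the Gen AI Amplifier's implementation, of how plain code can derive a pairwise test set for the parameters above using a simple greedy algorithm:

```python
from itertools import combinations, product

# Parameter values from the book-ordering example (labels are ours).
parameters = {
    "Number of books": ["1", "2-8", ">8"],
    "Sum": ["<100", "100-250", ">250"],
    "Membership card": ["None", "Silver", "Gold", "Platinum"],
    "Ordering period": ["Weekday", "Weekend", "Public holiday"],
}
names = list(parameters)

# Every pair of values across two different parameters must appear
# in at least one test case for pairwise coverage.
uncovered = set()
for (i, a), (j, b) in combinations(enumerate(names), 2):
    for va, vb in product(parameters[a], parameters[b]):
        uncovered.add(((i, va), (j, vb)))

# Greedy construction: repeatedly pick the full combination that
# covers the largest number of still-uncovered pairs.
tests = []
while uncovered:
    best, best_covered = None, set()
    for combo in product(*(parameters[n] for n in names)):
        covered = {((i, combo[i]), (j, combo[j]))
                   for i, j in combinations(range(len(names)), 2)} & uncovered
        if len(covered) > len(best_covered):
            best, best_covered = combo, covered
    tests.append(dict(zip(names, best)))
    uncovered -= best_covered

print(f"{len(tests)} pairwise test cases cover every value pair "
      f"(versus 108 exhaustive combinations)")
```

A greedy all-pairs construction like this typically needs only a dozen or so test cases here, a fraction of the 108 exhaustive combinations, and it is deterministic, auditable, and essentially free to run.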

Proactive Strategies to Address Gaps 

Rather than merely generating test cases, document the testing process and break it down into chunks. Have generative AI focus on the pre-steps, asking clarifying questions about the business and risk value of the input before any test cases are generated. 

Implement a “Test Advisor” to guide users in selecting appropriate test design techniques based on the context and requirements. The “Test Advisor” can prompt users with questions to determine the most suitable testing method. 
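As an illustration only (this is not the platform's actual logic), such an advisor could boil down to a simple rule that maps the risk level and the shape of the input to a test design technique:

```python
def advise_technique(risk_level: str, has_combinable_parameters: bool) -> str:
    """Hypothetical Test Advisor rule mapping context to a test design
    technique; a real advisor would ask the clarifying questions first."""
    if not has_combinable_parameters:
        return "equivalence partitioning / boundary value analysis"
    return {
        "low": "each-choice testing (cover every value at least once)",
        "medium": "pairwise (all-pairs) combination testing",
        "high": "higher-strength combinatorial or exhaustive testing",
    }.get(risk_level, "ask about business and risk value first")

print(advise_technique("medium", True))  # pairwise (all-pairs) combination testing
```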

Clearly delineate which parts of the process should be handled by Gen AI and which by traditional code. Use Gen AI for initial data combination suggestions and traditional code for precise pairwise testing generation. 

Enable users to interact with the system to integrate deeper contextual knowledge and typical testing scenarios. Users can provide feedback and additional context to refine test case generation. 

Conclusion 

Implicit knowledge is a cornerstone of effective AI utilization, transforming basic responses into deeply informed, contextually aware solutions. Through the Gen AI Amplifier, we’ve experienced the profound impact of integrating both explicit and implicit knowledge into our AI systems. 

By iteratively refining our prompts and embedding domain expertise, we can unlock the full potential of LLMs, ensuring that they not only meet explicit requirements but also address the nuanced challenges of real-world applications. This approach both enhances AI performance and drives greater efficiency and accuracy in software development processes, paving the way for more reliable and innovative solutions. 

As Polanyi might say in the context of generative AI, “We can know more than we can instruct.” This underlines the necessity of having an expert in the loop to harness both explicit and implicit knowledge effectively. 

Quality reimagined.

Exceed your expectations by delivering high-quality software quickly. By harnessing the power of our Gen AI Amplifier, which speeds up critical steps across the software life-cycle, you can achieve more, get to market faster, and maintain end-to-end quality. 

About the author

CTO at Sogeti DA&T | France
Antoine Aymer is the market leader supporting 15+ countries in delivering revenue and contribution targets. He manages global alliances with key software vendors and is the General Manager of the Cognitive QA platform.
