COGITA
CASE STUDY – 1
How an HR Team Reduced CV Processing Time by 90% with AI Automation
Overview
This case study presents how a recruitment company specializing in IT hiring transformed its CV processing workflow using AI, significantly improving efficiency, data quality, and scalability.
The Challenge
A recruitment company focused on IT roles faced a major operational bottleneck: the manual processing of candidate CVs.
CVs varied significantly in length (from one to several pages), formatting, and file types (DOCX, PDF), making automated handling difficult. As a result, recruiters were required to manually extract and input candidate data into the internal recruitment system.
This process was not only time-consuming but also prone to human error, limiting the number of candidates the team could effectively process.
Additional complexity came from the need to:
- translate CVs into multiple languages
- anonymize sensitive personal data
- tailor candidate profiles to specific job roles
In some cases, processing a single CV took up to 60 minutes, creating a significant workload for the HR team and slowing down the recruitment pipeline.
The goal was to dramatically reduce processing time while improving consistency and data quality.
The Solution
To address this challenge, an AI-powered system was implemented using modern multimodal models and large language models (LLMs) capable of analyzing both PDF and DOCX documents.
The system automatically extracts key candidate information from CVs and converts it into a structured JSON format, enabling seamless integration with the existing recruitment system.
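The extraction step described above can be sketched as follows. This is a minimal illustration, not the production implementation: the prompt wording, the JSON field names, and the `call_llm` helper are all hypothetical stand-ins for whatever LLM client and schema the team actually uses.

```python
import json

# Hypothetical target schema for the extracted candidate data;
# the real system's field names are not disclosed in the case study.
EXTRACTION_PROMPT = (
    "Extract the candidate's data from the CV below.\n"
    "Return ONLY valid JSON with keys: name, email, skills (list of strings),\n"
    "experience (list of objects with company, role, years).\n"
    "\nCV:\n{cv_text}\n"
)

def extract_candidate(cv_text: str, call_llm) -> dict:
    """Send the CV text to an LLM and parse the JSON it returns.

    `call_llm` is a placeholder for the actual model client
    (a hosted API, a local model, etc.).
    """
    raw = call_llm(EXTRACTION_PROMPT.format(cv_text=cv_text))
    candidate = json.loads(raw)
    # Validate the keys the downstream recruitment system expects.
    for key in ("name", "email", "skills", "experience"):
        if key not in candidate:
            raise ValueError(f"missing field: {key}")
    return candidate
```

Returning strict JSON (rather than free text) is what makes the "seamless integration" possible: the recruitment system can ingest the result without any manual re-keying.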
In addition to data extraction, the solution includes:
- automated CV translation
- candidate-to-role matching
- anonymization of sensitive data (names, companies, universities, links, etc.)
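The anonymization step can be illustrated with a small sketch. The case study does not describe the actual mechanism, so assume a two-stage design: an upstream detector (LLM or NER model) finds names, companies, and universities, and a substitution pass like the one below redacts them along with pattern-based identifiers. The regexes here are simplified examples.

```python
import re

def anonymize(text: str, known_entities: list) -> str:
    """Replace personal identifiers with placeholder tags.

    `known_entities` holds names/companies/universities detected
    upstream; this sketch only performs the substitution step.
    """
    # Pattern-based identifiers: emails, links, phone numbers.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"https?://\S+", "[LINK]", text)
    text = re.sub(r"\+?\d[\d \-]{7,}\d", "[PHONE]", text)
    # Entity-based identifiers supplied by the detection stage.
    for entity in known_entities:
        text = text.replace(entity, "[REDACTED]")
    return text
```

Keeping detection and substitution separate means the redaction rules stay auditable even when the detector is a black-box model.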
The system was designed to work with the company’s existing infrastructure, requiring no major changes to internal systems. This allowed for immediate deployment and fast adoption.
The approach ensured scalability, consistency, and a significant reduction in manual workload.
Barriers & Challenges
One of the main challenges was handling the high variability of CV formats. Documents differed significantly in structure, layout, and content quality, making consistent data extraction technically demanding.
Another key challenge was maintaining high accuracy while processing unstructured and multilingual data. Ensuring reliable anonymization and correct interpretation of candidate experience required careful tuning of the models.
Integrating the solution into existing workflows without disrupting ongoing recruitment processes also required careful planning and iterative testing.
Results
The implementation delivered immediate and measurable impact.
The system reduced CV processing time by 90%, dramatically accelerating candidate evaluation and improving operational efficiency.
Additional results include:
- processing of over 2,000 CVs within the first two months
- 98% data extraction accuracy
- significant improvement in data consistency across the recruitment pipeline
As a result, the HR team was able to shift focus from repetitive administrative tasks to higher-value activities such as candidate evaluation, relationship building, and strategic decision-making.
The solution also enabled the company to scale its recruitment processes more effectively.
CASE STUDY – 2
How We Built an AI System to Assess Patient Health from Tongue Images
Overview
This case study presents the development of an AI-powered system designed to support health assessment based on tongue images, drawing on principles from Traditional Chinese Medicine (TCM). The solution enables faster, more accessible preliminary diagnostics while supporting expert decision-making.
The Challenge
The goal of the project was to develop an AI algorithm capable of analyzing tongue images in line with Traditional Chinese Medicine (TCM) practices.
Tongue diagnosis is a well-established method in TCM, allowing practitioners to identify potential health issues based on visual indicators such as coating, discoloration, cracks, teeth marks, or spots. Typically, this process requires analyzing multiple images of the tongue (front, back, and side) and mapping observed features to a set of common TCM syndromes.
However, this process is highly time-consuming and requires deep domain expertise, significantly limiting its availability and scalability. As a result, patients often face barriers in accessing this type of analysis and receiving timely health insights.
The challenge was to automate this process while maintaining diagnostic quality and aligning with expert knowledge.
The Solution
To address this challenge, we collaborated closely with a TCM expert to define a structured set of tongue features that could be automatically detected by an AI model.
We collected and prepared training data, annotating images based on visible symptoms and their corresponding TCM syndromes. Given the nature of the data, this required significant preprocessing and standardization.
We developed convolutional neural network (CNN) models using transfer learning, leveraging pre-trained vision models to improve efficiency and performance. Two architectural approaches were tested:
- A direct classification model predicting TCM syndromes
- A multi-stage model (symptom detection → region analysis → classification), designed to improve interpretability and accuracy
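The multi-stage variant can be sketched structurally as below. The three callables stand in for the actual CNN stages (whose architectures are not detailed in the case study); the point of the decomposition is that every intermediate output stays human-inspectable, which is what makes this variant more interpretable than direct classification.

```python
from dataclasses import dataclass

@dataclass
class TongueAnalysis:
    symptoms: list   # e.g. ["thick coating", "teeth marks"]
    regions: dict    # symptom -> tongue region (TCM maps regions to organs)
    syndrome: str    # final TCM syndrome label

def multi_stage_pipeline(image, detect_symptoms, locate_regions, classify):
    """Symptom detection -> region analysis -> classification.

    Each callable is a placeholder for one trained model stage;
    chaining them keeps intermediate results available for the
    expert-validation step described below.
    """
    symptoms = detect_symptoms(image)
    regions = locate_regions(image, symptoms)
    syndrome = classify(symptoms, regions)
    return TongueAnalysis(symptoms, regions, syndrome)
```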
The models were optimized for prediction accuracy, stability, and scalability.
The solution was deployed in the AWS cloud, enabling automatic processing once a patient uploads images. Detected features are highlighted and sent to a TCM expert for validation.
The project required addressing multiple complexities, including limited labeled data, variability in image quality, and inconsistent historical data formats. Previous analyses were extracted from emails, often unstructured, requiring the use of NLP techniques to standardize and interpret textual descriptions.
Barriers & Challenges
Several challenges emerged during implementation.
One of the main barriers was the limited availability of high-quality, labeled data. Some symptoms were rare, making it difficult to build balanced datasets and achieve consistent model performance.
Additionally, image quality varied significantly, affecting model reliability. Some visual indicators were subtle and difficult to label consistently, requiring continuous consultation with TCM experts.
Another challenge was the structure of historical data. Past analyses were stored in email correspondence, with images and descriptions separated and often written in unstructured, free-text formats. This required additional processing and the application of NLP techniques to extract meaningful information.
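A minimal sketch of that email-labeling step might look like the following. The symptom vocabulary here is purely illustrative (the project's real ontology came from the TCM expert), and a simple keyword pass like this would only be a first stage before heavier NLP techniques.

```python
import re

# Hypothetical symptom vocabulary for labeling historical free-text analyses.
SYMPTOM_TERMS = {
    "thick coating": r"thick\s+coating",
    "teeth marks": r"teeth\s*marks?|scalloped",
    "cracks": r"cracks?|fissures?",
    "red spots": r"red\s+spots?",
}

def label_email(body: str) -> list:
    """Map a free-text historical analysis to standardized symptom labels."""
    found = []
    for label, pattern in SYMPTOM_TERMS.items():
        if re.search(pattern, body, re.IGNORECASE):
            found.append(label)
    return found
```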
Finally, adapting CNN architectures to reflect the nuances of medical image interpretation required careful tuning, as well as balancing classification performance across multiple syndrome categories.
Results
During the proof-of-concept phase, the CNN models achieved 80–90% accuracy in predicting TCM syndromes, along with high effectiveness in detecting individual tongue features.
The system has been successfully integrated into the workflow of a TCM expert, significantly accelerating the analysis process.
The solution provides a strong foundation for further development. With additional data and continued optimization, the system is expected to reach up to 99% accuracy in the future.
In the long term, once sufficient reliability is achieved, the system can operate fully autonomously, delivering analysis results directly to patients without requiring expert validation.
CASE STUDY – 3
How AI Can Improve Grant Writing — and Make It Actually Work
Overview
This case study presents the development of an AI-powered assistant designed to support enterprises in writing high-quality grant applications, particularly in innovation and green energy sectors.
The solution helps organizations navigate complex requirements, structure compelling proposals, and significantly reduce the time needed to prepare funding applications.
The Challenge
For large enterprises applying for grants — especially in innovation-driven and green energy sectors — the application process is highly complex and resource-intensive.
Writing a successful grant proposal requires not only deep domain expertise but also a strong understanding of how to:
- define objectives in line with formal requirements
- structure a compelling narrative
- use the right keywords to maximize evaluation scores
A major challenge is also interpreting the application criteria themselves and accurately assessing whether a company meets the requirements.
This creates a significant barrier, as even well-qualified organizations may struggle to effectively present their projects. As a result, the process is time-consuming, difficult to standardize, and often limits the number of applications a company can realistically prepare.
The Solution
To address this, we developed an AI-powered grant-writing assistant based on an agentic architecture using the Langflow framework.
The system integrates large language models (LLMs) to both generate structured content and process complex grant documentation.
To ensure data privacy and enterprise readiness, we used open-source models that can be deployed on local infrastructure, giving organizations full control over sensitive information.
The assistant includes several key functionalities:
- keyword and phrase generation aligned with grant requirements
- automatic creation of SMART objectives
- generation of executive summaries that can be expanded into full proposals
- structured support for building consistent and persuasive narratives
The system features a simple user interface, enabling fast onboarding and testing during the Proof of Concept phase.
It is deployed using AWS Bedrock, allowing efficient model management and optimization in terms of both cost and performance.
The result is a modular, scalable tool that automates critical stages of the grant-writing process and integrates easily with existing enterprise systems.
Barriers & Challenges
One of the key constraints was the requirement to use open-source LLMs, which typically offer lower performance compared to proprietary models — both in terms of output quality and context window size.
This was particularly challenging given that grant documentation can exceed 100 pages.
To overcome this, we designed a multi-step agent-based architecture that breaks the process into smaller, manageable stages:
- keyword generation
- executive summary creation
- SMART objective definition
- a dedicated peer-review module, using a separate LLM to evaluate and refine outputs
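The stages above can be chained as in this sketch. It assumes two separate model endpoints (`call_writer`, `call_reviewer`) so that the peer-review module uses a different LLM from the one that drafts content, as described; the prompts are heavily simplified placeholders.

```python
def grant_pipeline(call_writer, call_reviewer, grant_call: str, company: str) -> dict:
    """Run the multi-step agent flow with a review pass at each stage.

    `call_writer` / `call_reviewer` stand in for two open-source LLM
    endpoints. Breaking the work into stages keeps each prompt small
    enough for a limited context window.
    """
    def reviewed(prompt: str) -> str:
        draft = call_writer(prompt)
        # A separate model evaluates and refines the draft.
        return call_reviewer(f"Review and improve this draft:\n{draft}")

    keywords = reviewed(f"List keywords matching this grant call:\n{grant_call}")
    summary = reviewed(f"Executive summary for {company}, using: {keywords}")
    objectives = reviewed(f"SMART objectives consistent with:\n{summary}")
    return {"keywords": keywords, "summary": summary, "objectives": objectives}
```

Feeding each stage's reviewed output into the next is what lets the pipeline handle documentation far larger than any single prompt could hold.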
Another challenge was the lack of high-quality reference materials. While many grant calls are publicly available, successful example applications — especially in the relevant domain — are rarely accessible.
This required extensive research and careful validation of generated content.
Results
During the Proof of Concept phase, the solution was evaluated very positively by the client’s grant-writing experts.
One of the most valuable features proved to be the automatic generation of relevant keywords and phrases, significantly improving the quality and alignment of applications with formal requirements.
We estimate that the solution can reduce the time required to prepare a grant application by up to 50%, while also increasing the likelihood of success.
This is achieved through:
- better-defined objectives
- improved alignment with evaluation criteria
- more structured and persuasive narratives
Overall, the system enhances both efficiency and effectiveness, enabling organizations to scale their grant application efforts while improving quality.
CASE STUDY – 4
SimSale AI — A New Standard for Sales Training in Real Estate
Overview
This case study presents SimSale AI — an AI-powered solution designed to transform how real estate agents are trained. By enabling realistic sales simulations, the platform helps agents build skills faster, improve performance, and reduce onboarding time.
The Challenge
Real estate agents — especially those at the beginning of their careers — often spend hundreds of hours learning through trial and error. This results in inefficient meetings, lost opportunities, and slow skill development.
The industry lacked a tool that would allow agents to practice real sales conversations in a safe, controlled environment.
From a business perspective, this created several challenges:
- learning “on real clients,” leading to wasted leads and time
- lack of habit formation and standardization after training
- long and costly onboarding processes for junior agents
- unpredictable performance, often dependent on a few top performers
- high turnover within the first 1–2 months
- limited visibility into team performance and learning progress
Managers had little insight into who was improving, what was not working, and when intervention was needed.
The Solution
To address these challenges, SimSale AI was developed as an interactive training platform based on AI-driven voice simulations.
The system enables agents to engage in realistic, dynamic conversations with AI-powered clients, allowing them to practice various sales scenarios in a safe environment.
Key features include:
- real-time voice-based interaction with AI
- simulation of different client personas (cooperative, neutral, difficult)
- ability to practice both simple and complex sales scenarios
- scalable training sessions without the need for real clients
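The persona mechanism can be sketched as a set of system-prompt templates fed to the real-time model. The persona texts below are illustrative stand-ins; the production prompts are far more detailed and tuned for voice interaction.

```python
# Hypothetical persona definitions for the simulated client.
PERSONAS = {
    "cooperative": "You are an open, friendly buyer. Answer questions honestly.",
    "neutral": "You are polite but uncommitted. Make the agent earn details.",
    "difficult": "You are sceptical and price-sensitive. Raise objections often.",
}

def build_client_prompt(persona: str, scenario: str) -> str:
    """Compose the system prompt for the AI-powered client.

    `scenario` describes the sales situation to simulate
    (e.g. a first apartment viewing).
    """
    if persona not in PERSONAS:
        raise ValueError(f"unknown persona: {persona}")
    return (
        f"{PERSONAS[persona]}\n"
        f"Scenario: {scenario}\n"
        "Stay in character for the whole conversation."
    )
```

Swapping the persona string is all it takes to move an agent from easy practice runs to deliberately hard ones.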
The solution leverages advanced AI technologies, including real-time conversational models, to deliver fast, responsive, and natural interactions.
After testing multiple approaches (including ElevenLabs and LiveKit), the final implementation was based on OpenAI Realtime, which provided the best balance of responsiveness, realism, and stability.
The result is a flexible and scalable training tool that enables consistent skill development across teams.
Barriers & Challenges
The main challenge was achieving a high level of realism in conversations.
The system needed to:
- produce natural, human-like voice responses (not robotic)
- respond instantly to user input without lag
- maintain engagement and avoid repetitive or “boring” interactions
- support a wide range of scenarios, from easy to highly challenging
Achieving this required extensive experimentation with different technologies and architectures.
Selecting the right solution for real-time interaction was critical, and the project evolved alongside rapidly developing AI tools, ultimately benefiting from the release of OpenAI Realtime.
Results
The solution delivered strong and measurable outcomes:
- 85% of users rated the experience as positive or very positive
- 86% considered the conversations realistic (average score: 8.13/10)
Additionally:
- onboarding time for new agents was significantly reduced
- costs related to inefficient meetings and travel decreased
- agents were able to conduct 10–20 training simulations per week, compared to only 3–5 real client meetings
This allowed for faster skill development, better preparation, and more consistent performance across teams.
CASE STUDY – 5
How AI Is Transforming Student Assessment in the UK
Overview
This case study presents an AI-powered solution designed to support universities in verifying student knowledge and practical skills in the era of widespread AI usage.
The system enables scalable, automated assessment while maintaining a high level of academic rigor and personalized feedback.
The Challenge
In the era of AI-driven tools, traditional methods of student assessment are facing significant challenges.
Students are increasingly using AI to generate assignments within minutes, making it difficult for universities to accurately evaluate their true level of understanding and independence.
At the same time, leading universities often enroll hundreds of students per program, making it impossible to dedicate sufficient time to individually assess each student’s knowledge.
The problem is even more pronounced in practical fields, where success depends not only on theoretical understanding but also on the ability to apply knowledge in real-world scenarios.
As a result, universities struggle to ensure fair, effective, and scalable assessment processes.
The Solution
To address these challenges, we developed an AI-powered voice-based assessment system designed to simulate real academic evaluation.
The solution works by:
- analyzing a student’s submitted work
- incorporating the course curriculum and learning objectives
- conducting an interactive voice-based assessment
The AI acts as an examiner, asking targeted questions to verify the student's understanding and to confirm that the work was developed independently.
After the session, the system generates structured feedback, highlighting:
- strengths
- weaknesses
- areas for improvement
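A single round of that flow can be sketched as below. The `ask` and `listen` callables are placeholders for the LLM and the voice interface respectively; the real system runs a multi-turn conversation, and the prompt wording and feedback format are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentFeedback:
    strengths: list = field(default_factory=list)
    weaknesses: list = field(default_factory=list)
    improvements: list = field(default_factory=list)

def run_assessment(submission: str, curriculum: str, ask, listen) -> AssessmentFeedback:
    """One simplified question-answer round of the oral assessment.

    The question is grounded in both the submitted work and the
    curriculum, mirroring how an examiner would probe a student.
    """
    question = ask(
        f"Given this curriculum:\n{curriculum}\n"
        f"and this submission:\n{submission}\n"
        "ask one probing question about the student's own work."
    )
    answer = listen(question)
    # Assumed output format: three parts separated by semicolons.
    verdict = ask(
        f"Assess this answer:\n{answer}\n"
        "Return strengths; weaknesses; improvements, separated by ';'."
    )
    s, w, i = (part.strip() for part in verdict.split(";"))
    return AssessmentFeedback([s], [w], [i])
```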
Beyond assessment, the tool also helps students improve:
- clarity of communication
- fluency in expressing ideas
- confidence in presenting knowledge
The approach was designed to replicate the experience of an oral examination with a professor, while remaining scalable and consistent.
Barriers & Challenges
The main challenge was designing the system to closely resemble real academic interactions.
This required developing highly detailed and precise instructions for the AI model, ensuring that:
- questions are relevant and aligned with the curriculum
- feedback is accurate, constructive, and educational
- the interaction feels natural and engaging
Achieving a balance between automation and academic quality was critical, particularly in maintaining credibility within a university setting.
Results
The solution has already been tested by dozens of users and received very positive feedback.
Early results indicate strong user satisfaction and high perceived value in both assessment quality and feedback usefulness.
The system is currently being integrated into the IT infrastructure of a university in the United Kingdom, marking a key step toward real-world deployment.
This approach has the potential to significantly improve how universities assess student knowledge — making the process more scalable, reliable, and aligned with the realities of AI-driven education.