GEP Qi · Conversational AI

Teaching AI to walk someone through a process

A buyer needs to change a purchase order — increase a quantity, extend a delivery date, adjust pricing. Simple enough, right? Except they don't know their PO number. Or they have it, but not the contract it's tied to. The system expected them to know exactly where to go. That's not a workflow. That's a maze.

Timeline
~1 month
Role
Sole Designer
Stakeholders
PMs, Cross-functional, Leadership
Platform
GEP Qi · Enterprise AI

The system demanded expertise. The user just had a question.

Purchase order amendments are one of the most common tasks in procurement. Something changes — quantity, price, delivery date, a new line item — and the buyer has to update the PO. In theory, straightforward. In practice, the existing flow assumed the user already knew everything: which PO to amend, where to find it, how the system organized contracts versus purchase orders, and which screens to navigate through.

The people using these tools are specialists in procurement, not specialists in the software. They know their contracts, their vendors, their budget codes. But they don't always know that to amend a PO, they need to go to Manage, then Purchase Orders, then Search, then Open, then click Create Amendment, wait for the form, fill in the changes, and submit for approval. That's seven steps of system navigation before they even start doing the actual work.

01

Navigation overload

Amending a PO required navigating through multiple screens. Users had to hold a mental map of the system's architecture just to complete a routine task.

02

Multiple entry points, no guide

A user might start from a contract, from a PO, or from nothing at all. The system had no way to meet them where they were.

03

Manual data entry

Even when users found the right PO, they had to manually fill in amendment details across dense forms that offered no intelligent defaults or pre-filled context.

04

No visibility into "why"

The system processed the amendment but never explained what was happening behind the scenes. Users submitted changes and hoped for the best.

Before

Navigate to PO module, search for PO, open it, understand which fields to edit, fill amendment form, submit, hope you didn't miss anything. Requires system expertise + time.

After

Tell the AI what you need in plain language. It figures out the PO, extracts the details, pre-fills the form, shows you what it did and why, and lets you review before submitting.

Three users, three contexts, one conversation

The fundamental insight was that users don't come to this task the same way. Some know exactly which PO they want to amend. Some know the contract but not the PO. And some just know they need to change something and have a document or a vague memory to work from. Any solution had to handle all three gracefully — without forcing the user to figure out the system's taxonomy first.

Use Case 1

"I have the contract number"

The user knows the contract but not which PO to amend. The AI retrieves the contract, finds associated POs, and lets the user pick.

Most common
Use Case 2

"I have the PO number"

The user knows exactly which PO to change. The AI pulls it up immediately and asks what they want to modify.

Fastest path
Use Case 3

"I don't have either"

The user has a document, an email, or a vague description. The AI uses file upload and data extraction to identify the right PO.

Most complex

Interactive prototype. Click through the conversational flows — contract # path, PO # path, and open-ended path with file upload. Open in Figma

Designing conversation, not screens

This project was different from anything I'd designed before. The primary interface wasn't a dashboard or a form — it was a conversation. Conversations have a fundamentally different design grammar than traditional UI. There's no fixed layout. The content is generated, not predetermined. The user's path through the experience depends on what they say, not where they click.

GEP Qi was still in its early stages when I started. I wasn't just designing for a mature platform — I was helping shape what AI-assisted procurement interaction would feel like.

Understanding the conversation map

Before any visual design, I mapped out every conversational branch. If the user says "I want to amend a PO," the AI needs to ask: do you have a PO number? If no, do you have a contract number? If no, do you have a document? Each answer creates a different path, and each path has to feel natural. I worked through 35+ screens' worth of dialogue trees, edge cases, and fallback paths.
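The branching just described reduces to a small routing decision at the top of the conversation. A minimal sketch, assuming three entry points and a clarifying-question fallback; the path names and function are hypothetical, not the actual Qi implementation:

```python
# Illustrative routing for the three entry points described above.
# The AI checks what the user can provide, in order of directness,
# and falls back to a guiding question. Names are hypothetical.

def route_amendment_request(has_po: bool, has_contract: bool, has_document: bool) -> str:
    """Pick the conversational path from what the user can provide."""
    if has_po:
        return "po_path"          # fastest path: pull the PO up directly
    if has_contract:
        return "contract_path"    # list the POs under the contract
    if has_document:
        return "extraction_path"  # extract a PO reference from the upload
    return "clarify"              # ask a guiding question instead

print(route_amendment_request(False, True, False))  # → contract_path
```

The point of the sketch is the ordering: the system tries the cheapest disambiguation first, and never asks the user to learn its taxonomy before answering.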

Designing the conversational UI patterns

Conversational AI needs more than chat bubbles. I designed a system of response patterns: text answers, inline forms embedded within the conversation, file upload cards for extracting PO details from documents, and action confirmations before submission. Each pattern had to work independently and flow naturally from one to the next.
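One way to think about this pattern system is as a small vocabulary of typed response blocks that compose into a turn. A hedged sketch, with hypothetical type and field names chosen for illustration:

```python
# Hypothetical sketch of the response-pattern vocabulary: each AI turn
# is an ordered list of typed blocks rather than a single chat bubble.
from dataclasses import dataclass

@dataclass
class TextAnswer:
    body: str

@dataclass
class InlineForm:
    fields: dict      # field name -> pre-filled value
    editable: list    # fields the user must review or complete

@dataclass
class ActionConfirmation:
    summary: str
    requires_approval: bool = True

# A single turn composes the patterns: explain, show the form, confirm.
turn = [
    TextAnswer("I've prepared the amendment for PO #4500021783."),
    InlineForm(fields={"quantity": 75, "delivery_date": "2025-03"},
               editable=["quantity", "delivery_date"]),
    ActionConfirmation(summary="Submit amendment for approval"),
]
print(len(turn))  # → 3
```

Modeling turns as composable blocks is what lets each pattern "work independently and flow naturally from one to the next": the renderer only needs to know the block types, not the conversation's shape.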

The hardest decision: AI transparency

When the AI extracts data from an uploaded document and pre-fills an amendment form, the user needs to trust that it got it right. How do you show an AI's reasoning without overwhelming the user? I landed on a two-tab approach: "Answer" shows the final result. "AI Process" shows the work — what was extracted, how data was matched, which fields were pre-filled and why. Transparency as an opt-in, not a wall of text.
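Structurally, the two-tab approach amounts to one response payload with two views: a clean result rendered by default, and a reasoning trail rendered only on demand. A sketch of what that payload might look like; the field names are assumptions, though the 98% extraction confidence comes from the case study's own example:

```python
# Sketch of a two-view response payload behind the Answer / AI Process
# tabs. The default view renders only "answer"; "ai_process" is opt-in.
# Field names are illustrative assumptions.

response = {
    "answer": {
        "summary": "Amendment prepared for PO #4500021783",
        "form": {"line_item_3_quantity": 75, "delivery_date": "2025-03"},
    },
    "ai_process": [  # shown only when the user opens the AI Process tab
        {"step": "parse_document", "detail": "uploaded vendor email"},
        {"step": "extract_po", "value": "4500021783", "confidence": 0.98},
        {"step": "prefill_fields", "detail": "quantity, delivery date"},
    ],
}
```

Keeping the trail in the payload but out of the default view is the whole trick: transparency is always available, never imposed.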

Balancing guidance with autonomy

There's a fine line between an AI that guides you and one that decides for you. The design gives the AI enough intelligence to pre-fill forms and suggest next steps, but always surfaces a review moment before any action is taken. The AI proposes. The human decides.

Stakeholder alignment

Close collaboration with PMs who understood procurement end to end, cross-functional teams shaping the Qi platform's AI capabilities, and leadership who had a vision for where conversational AI fit in the product strategy. The transparency layer went through multiple iterations based on feedback about what felt trustworthy versus what felt like a black box.

A conversation that does the work for you

The final design is a conversational interface within GEP Qi that walks the user through a PO amendment from start to finish — regardless of where they start. It's not a chatbot that answers questions. It's an AI collaborator that understands the task, gathers the information it needs, does the heavy lifting, and lets the user review and confirm.

Buyer
I need to amend a purchase order. I have the contract number — C-2024-0847.
GEP Qi
I found contract C-2024-0847 with Meridian Technologies. There are 2 active POs under this contract. Which one would you like to amend?

→ PO #4500021783 — $128,400 — IT Services
→ PO #4500021791 — $47,200 — Cloud Infrastructure
Buyer
The IT Services one. I need to increase line item 3 quantity and extend the delivery date.
GEP Qi
Got it. I've prepared the amendment for PO #4500021783. Please review the changes below before I submit for approval.

Inline forms replace page navigation. Instead of redirecting to a separate amendment form, the form lives inside the conversation. The AI pre-fills what it can — current quantities, existing line items, contract references — and highlights the fields the user needs to update.

File upload as an input mechanism. For the "I don't have a number" case, the AI accepts document uploads — scanned POs, vendor emails, amendment requests. It extracts the relevant data, matches it against the system, and pre-fills the amendment. Particularly important for users who work more with documents than the system's internal reference numbers.
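The extraction-and-matching step above can be sketched as: find a PO-like reference in the uploaded text, then check it against known records. A minimal illustration under stated assumptions; the regex and the in-memory record store are simplified stand-ins, not the real matching pipeline:

```python
# Hedged sketch of the document path: pull a PO reference out of
# uploaded text and match it against known records. The pattern
# assumes 10-digit PO numbers starting "45", per the examples shown.
import re

PO_RECORDS = {"4500021783": "IT Services", "4500021791": "Cloud Infrastructure"}

def extract_po_reference(text: str):
    """Return (po_number, matched) for the first PO-like number found."""
    match = re.search(r"\b(45\d{8})\b", text)
    if not match:
        return None, False
    po = match.group(1)
    return po, po in PO_RECORDS

po, found = extract_po_reference(
    "Re: amendment for PO 4500021783, new target March 2025")
print(po, found)  # → 4500021783 True
```

In the real flow, an unmatched or low-confidence extraction would drop the user back into the conversation with a clarifying question rather than a dead end.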

File upload flow. Document extraction, data matching against PO records, and pre-filled amendment form. Open in Figma

The Answer / AI Process tabs. Every AI response has two views. The "Answer" tab shows the clean result — the amendment form, the summary, the next action. The "AI Process" tab shows what happened behind the scenes: which document was parsed, what was extracted, how data was matched, and the confidence level for each pre-filled field.

Answer
AI Process

Amendment prepared for PO #4500021783

Line item 3 quantity updated from 50 to 75. Delivery date extended to March 2025. Revised total: $170,400. Ready to submit for approval.

Reasoning trail

Source: uploaded vendor email. Extracted PO reference: 4500021783 (confidence: 98%). Matched line item 3 by description: "Cloud Migration Support Hours." Original quantity: 50. Requested quantity: 75. Price per unit unchanged at $1,680. Delivery extension parsed from email body: "new target March 2025."

The AI should feel like a colleague who did the prep work before the meeting — not a black box that tells you what it decided.

From navigating the system to talking to it

35+
Screens designed
across 3 conversational paths
3
Entry points unified
into one conversation
2
Transparency layers
Answer + AI Process

The conversational amendment flow collapsed what used to be a multi-screen, multi-click navigation process into a single dialogue. Users no longer needed to know the system's information architecture to get things done.

The transparency layer proved to be the most valuable design decision. During stakeholder reviews, the AI Process tab was consistently cited as the feature that made the AI trustworthy. In enterprise procurement, where every PO amendment has financial and compliance implications, "trust but verify" isn't a philosophy — it's a requirement.

Most importantly, this work established a design pattern for conversational AI within GEP Qi that extended beyond PO amendments. The interaction grammar — conversational guidance, inline forms, file-based input, transparency tabs — became a template for other Qi workflows. One case study that shaped an entire product language.

What stayed with me

There's a moment in designing conversational AI where you realize that the conversation is the interface. There's no sidebar to fall back on. No persistent navigation. The user's context is whatever they've said, and the AI's context is whatever it's learned from the dialogue. You're designing a relationship, not a layout.

The hardest question wasn't about visual design — it was about trust. How much should the AI do before asking permission? How much of its reasoning should it show? Too little transparency and the user doesn't trust the output. Too much and you've replaced one kind of cognitive overload with another. The Answer / AI Process split was my answer, but it took multiple iterations and long conversations with PMs to land on that balance.

Krishnamurti wrote about the importance of observation without judgement. I think about that when I design AI interactions. The AI should observe the user's intent, respond to it clearly, and then step back — let the human decide. The moment you design an AI that decides on behalf of the user, you've lost the thread. The AI is the collaborator. The human is the author.
