
AI Search Builder

Crunchbase gives users access to a wealth of valuable data – financial, investment, and growth insights for over 3 million companies, 280,000 investors, and more – but archaic filtering and search patterns make this data hard to find and sift through.

I incorporated AI and natural language processing to help users search in a streamlined, natural way.

Context

Crunchbase is a company data platform for salespeople, investors, and CEOs to find and close business.

My Impact

284% increase in usage

10% increase in trial starts

4% increase in revenue

My role

Product design

UX research

Collaborators

Product manager and tech lead 

UX researcher


Filters of the past

Crunchbase contains a vast amount of useful data on companies, investors, and funding rounds, but users are required to sift through hundreds of filters to find the data they need. When users do find what they're looking for, it is incredibly valuable – Crunchbase users who create a search with 3 or more filters and then save or export those search results are more likely to retain.

However, we were seeing slipping retention. For years, users told us about the difficulties they faced with Search: identifying which filters they would need, finding those filters, and updating them once applied. The entire experience was clunky and in desperate need of a refresh.

In a nutshell:


Users know what data they need, but they don't know how to translate that need into filters


Finding and refining selected filters is difficult because of cluttered UI

Without the right filters selected, users don't find or act on useful results

(and eventually, abandon Crunchbase)

This is where I come in: our data team had been developing an AI filter builder, and my scrum team and I were tasked with incorporating the technology into Advanced Search.

Building context

My first step was understanding our existing technology, human search behavior, and their implications for our new solution.

Advanced Search

Users selected from hundreds of filters across two separate search experiences to create a curated list of companies

AI filter builder

The filter builder can turn any user-generated question or command into a selected set of filters. It was usually accurate, but not always.

Humans search iteratively

When someone is searching, they start with a question, find the answer, and then use that context to further refine and specify their question. This cycle continues until they are satisfied with the answer.

This led my PM and me to our initial hypothesis, which shaped the first round of designs: users would naturally gravitate toward either the AI-powered filter builder or the manual filters, choose their preferred path, and continue using that consistently.

Iteration 1: Designing for the split

To test our hypothesis, I designed a clear split between AI and manual filters, with an emphasis on the AI filter builder. Users could engage with AI or pick manual filters, and to support refining their search, selected filters appeared just above the results.

I partnered with our front-end lead and researcher to build and test the design. Feedback was mixed: some participants found the search process faster and easier with AI, while others struggled with prompt writing and questioned the AI's reliability. Overall, users welcomed the addition of AI into the process, but needed more guidance and clarity between the two paths, which led to our next iteration.

Iteration 2: Building confidence

There were two clear themes from our research that I implemented into V2 of the search layout:

Build confidence by example

Iteration 2 put examples of AI queries front and center so users would feel confident writing their first searches.

Relate AI and filters

I introduced the AI input and filters in the same module to reinforce how AI queries led to filters being selected.

With confidence in our new direction, my PM, eng lead, and I worked closely to develop a six-week beta program to test the design.

Beta test success

At the end of the program, I scheduled calls with 14 beta participants for my PM and me. After using the design for six weeks, our participants shared some incredibly insightful feedback.

FINDING 1

Users that aren't comfortable with filters generally loved AI search

Power users still felt like it was more efficient for them to make manual searches, and at times didn't trust our AI, but they all preferred the beta over the original.

FINDING 2

Users really, really want us to improve the experience around engaging with results.

Through these user calls, my PM and I formalized a user journey that helped us ideate beyond query creation and shaped the rest of the design process for this work.


Find

"I want to effortlessly find a targeted group of companies based on specific criteria"

Verify

"I want to ensure the results are accurate and relevant to my desired criteria"

Analyze

"I want to deeply understand trends, patterns, and insights from my search results"

EXPERIENCE NEEDS

Fast, accurate to mental model, discoverable

EXPERIENCE NEEDS

Configurable, scannable

EXPERIENCE NEEDS

Understandable, actionable

Iteration 3

Now informed by users' natural workflows, I used hierarchy to reinforce the journey that led our beta participants to success. By moving the AI search input above the filters, I encouraged users to start their search with the AI filter builder and then, after reviewing the results, make small tweaks by manually adding or removing selected filters. To support the Find job to be done, we also collapsed the filters automatically on scroll.

The design entered an A/B test for 6 more weeks, and then we evaluated our success.

Letting AI do the lifting

After success in our A/B test, leadership greenlit our design for full release. With some visual redesign and plenty of follow-up tasks, AI Search launched on Crunchbase.

The key ingredients for my design process:


Crafting the user journey around evolving AI capabilities


Understanding the iterative ways people ask and refine questions


Balancing executive feedback with user needs

Countless Figma iterations, one beta test, 24 user interviews, and an A/B test later, AI Search Builder launched in February 2025. So far, we have seen:

↑ 284% increase

in Search usage

↑ 7.6% increase

in search results saved or exported (both linked to retention)

↑ 10% increase

in trial starts for Crunchbase's paid plan

12.5x

more likely that Search users return to Crunchbase within a week
