OVERVIEW

Directing CRC applications to tailored roles

Clarifying the path to the right CRC role

Emory University’s Principal Investigators required Clinical Research Coordinators with specific skill sets, yet varying and unclear role definitions led applicants to apply for multiple CRC positions indiscriminately.

This resulted in an overwhelming volume of unsuitable applications, making it difficult for PIs to find the right candidates quickly. By clearly defining role requirements and guiding applicants toward appropriate opportunities, we streamlined the recruitment process and ensured better matches between talent and needs.

Role

Shaped the product from 0-1. Prototyped micro-interactions and worked closely with engineering to execute the details.

Team

1 Project Manager, 5 Developers

Timeline

Dec 2022 - Feb 2023 (Launched in February 2023)

PROBLEM & APPROACH

Over-application for mismatched experience levels

Over 90% of applications were not qualified for the role

Principal Investigators (PIs) at Emory University face a unique challenge: the specific needs for Clinical Research Coordinator (CRC) positions vary greatly across research projects. This variability often confuses applicants, leading many to apply for multiple CRC levels indiscriminately and resulting in an excessive number of applications. Addressing this issue is key to simplifying the recruitment process.

Image: Applications & Positions for CRC I, II, III and IV at Emory University in 2022

A platform to guide applicants to apply for the right position

Talent Trace simplifies the CRC job application process. Through a series of tailored questions, it helps users identify the most appropriate CRC positions for their skills and aspirations. This system also offers resources for higher-level opportunities.

Product Alignment with Clients

Collaborating with clients to define the screening logic

How to screen candidates for different CRC levels?

I developed a flow chart to streamline candidate screening for CRC levels I to IV, leveraging key metrics like education, work/research experience, and certifications (provided by clients). This visual tool not only enhanced communication with clients but also effectively aligned their expectations with the logic underpinning our product development.
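A minimal sketch of how that flow chart might translate into code, assuming a simplified rubric: the criteria names and thresholds below are illustrative placeholders, not the client-provided requirements.

```typescript
// Illustrative level-screening logic. The criteria and thresholds are
// hypothetical stand-ins for the client-provided rubric, not Emory's
// actual requirements.

type Candidate = {
  hasRelevantDegree: boolean;        // e.g., bachelor's in a related field
  yearsOfResearchExperience: number; // work/research experience
  hasCertification: boolean;         // e.g., a clinical research certification
};

type CrcLevel = "CRC I" | "CRC II" | "CRC III" | "CRC IV";

function recommendLevel(c: Candidate): CrcLevel {
  // Walk the decision chart from the most senior level down,
  // returning the first level whose requirements are met.
  if (c.hasRelevantDegree && c.hasCertification && c.yearsOfResearchExperience >= 7) {
    return "CRC IV";
  }
  if (c.hasRelevantDegree && c.hasCertification && c.yearsOfResearchExperience >= 4) {
    return "CRC III";
  }
  if (c.hasRelevantDegree && c.yearsOfResearchExperience >= 2) {
    return "CRC II";
  }
  return "CRC I";
}

// Example: a certified candidate with five years of research experience
console.log(recommendLevel({
  hasRelevantDegree: true,
  yearsOfResearchExperience: 5,
  hasCertification: true,
})); // "CRC III"
```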

How to guide users through the assessment tool?

Talent Trace, an MVP product independent from Emory University's iCIMS job application system, empowers users to accurately identify suitable CRC levels for application. Future integration plans include embedding Talent Trace into the job application process, where it will automatically screen applicants and provide tailored recommendations.

MVP Prototype & Pilot Test

User research and pilot test

How to guide applicants to apply for the right CRC level with confidence and ease?

I created rapid prototypes of the MVP product to test with users. We recruited and tested with 5 Emory University CRC I professionals. The test aimed to refine design elements for user experience, determine motivations for assessment participation, and ensure clarity and trust in the results.

1. Decide the feeling to convey through design

2. Identify motivators for assessment participation

3. Ensure trustworthiness of results

4. Ensure clarity and usability in question format

Major findings and insights

Because the product followed a familiar format, the test surfaced few usability issues, but it did reveal several important insights.

Balancing efficiency with engagement

While users prioritize quick assessment completion, they also showed interest in features that offer more than just efficiency. Adding elements of enjoyment can enhance the overall experience in their job-hunting journey.

Optimizing selection processes

Users faced challenges with the overwhelming options in dropdown lists, suggesting a need for a more streamlined selection process. This will ensure ease of use and accuracy in their responses.

Transparent results and justifications

A critical finding was the need for transparency in the assessment outcomes. Users wanted to know why they were deemed qualified or not, which is essential for building trust in the product and preventing unqualified applications.

Design Explorations & Iterations

How to help users apply with ease and confidence 👀

Balancing efficiency with engagement

A critical aspect of the design involved determining the optimal arrangement of sections and questions. I explored and tested 3 layouts and interactions, tailoring them to user needs ranging from efficiency to ease of use.

Option A

Vertical layout

A vertically scrollable form with a left-side navigation bar; all questions in a section are listed on one page.

Maximum efficiency

Matches users’ expectations of job application product formats

May appear lengthy despite its efficiency

Increases the risk of missing questions

Option B

Horizontal cards

Questions in each section are displayed on a horizontally scrollable page, navigable via left and right arrows.

Enhances concentration on one item at a time

Works well with the multiple sections structure of the survey

Requires more clicks to complete

Users may not be accustomed to this mode and interaction

Option C

Vertical layout with screening questions

Begins with screening questions for each section, focusing users on these before transitioning to a standard vertical scroll for subsequent questions.

Offers an efficient yet easy experience

Enables users to answer screening questions first - then delve deeper or skip

Resonates with general online form expectations, though differs from typical job application layouts

Adapting components to the number of options

To address the challenge of selecting from option lists of varying lengths, I matched the component (checkbox/radio buttons, dropdown, or listbox) to the number of options, as sketched in code after this list:

1. < 5 options: radio buttons or checkboxes
2. 5 - 15 options: dropdown
3. > 15 options: listbox (lets users type to search for options)
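Below is a minimal sketch of this rule as a pure function; the component names are illustrative labels, not the shipped implementation.

```typescript
// Sketch of the option-count rule above as a pure function.
// The thresholds (5 and 15) come directly from the rule;
// the component labels are illustrative.

type InputComponent = "radio-or-checkbox" | "dropdown" | "listbox";

function componentFor(optionCount: number): InputComponent {
  if (optionCount < 5) return "radio-or-checkbox";
  if (optionCount <= 15) return "dropdown";
  return "listbox"; // long lists get type-to-search
}

console.log(componentFor(3));  // "radio-or-checkbox"
console.log(componentFor(12)); // "dropdown"
console.log(componentFor(40)); // "listbox"
```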

Option A

Dropdown

Best for 5 - 15 options, offering a space-saving solution. However, dropdowns become less user-friendly for lists exceeding 15 items due to navigation and selection challenges.

Ideal for fewer options - save space

Challenging to browse, add, or remove items in longer lists

Prone to being closed unintentionally

Option B

Listbox

Users can click items inside the container box to select one or many from the list, without needing to click anything to reveal the options.

Users don't need to click on anything to reveal options inside

Allows users to see multiple options at once

The search function allows users to find options quickly in a long list

Quickly add skills

Users don't need to select "Other" to add options that aren't in the list. They can simply type to add it.
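A small sketch of the listbox's type-to-search and type-to-add behavior, assuming a plain string list; the function names and sample skills are hypothetical.

```typescript
// Minimal sketch of the listbox's type-to-search and type-to-add
// behavior. Function names and sample data are illustrative, not
// taken from the shipped code.

function filterOptions(options: string[], query: string): string[] {
  const q = query.trim().toLowerCase();
  return q === ""
    ? options
    : options.filter((o) => o.toLowerCase().includes(q));
}

function addIfMissing(options: string[], entry: string): string[] {
  const e = entry.trim();
  // Typing a skill that isn't listed adds it directly,
  // replacing the usual "Other" escape hatch.
  const exists = options.some((o) => o.toLowerCase() === e.toLowerCase());
  return exists ? options : [...options, e];
}

const skills = ["Phlebotomy", "IRB submissions", "Data entry"];
console.log(filterOptions(skills, "irb"));        // ["IRB submissions"]
console.log(addIfMissing(skills, "EDC systems")); // appends "EDC systems"
```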

Clarifying result justifications

User feedback indicated that assessment outcomes (qualified or not for CRC positions) lacked persuasiveness. The key issue was the absence of clear explanations for these results. To improve user trust and understanding, it’s essential to incorporate detailed reasons behind each outcome, directly addressing users' queries about their assessment performance.

Option A

Result - Qualified or not

Shows only the final verdict for a CRC level: qualified or not, with no supporting explanation.

Simple and quick to scan

Offers no justification, leaving users to question the outcome

Does little to deter unqualified applications

Option B

Explanatory result

Pairs the verdict with the reasoning behind it, showing which criteria were met and which were not.

Answers users' "why" directly, building trust in the outcome

Helps unqualified applicants understand what to improve

Supports the goal of preventing unqualified applications
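One way to model such an explanatory result is to attach a per-criterion breakdown to the verdict; the field names and sample reasons below are illustrative, not the product's actual data model.

```typescript
// Sketch of an explanatory result: each criterion carries its own
// pass/fail flag and a human-readable reason, so the verdict can be
// justified line by line. Field names and sample text are illustrative.

type CriterionResult = {
  criterion: string; // e.g., "Research experience"
  met: boolean;
  reason: string;    // shown to the user alongside the verdict
};

type AssessmentResult = {
  level: string;     // e.g., "CRC II"
  qualified: boolean;
  breakdown: CriterionResult[];
};

const example: AssessmentResult = {
  level: "CRC II",
  qualified: false,
  breakdown: [
    { criterion: "Education", met: true, reason: "Bachelor's degree in a related field" },
    { criterion: "Research experience", met: false, reason: "CRC II asks for 2+ years; you reported 1" },
  ],
};

// The UI renders the breakdown under the verdict instead of a bare
// "not qualified", answering the "why" users asked for in testing.
console.log(example.breakdown.filter((c) => !c.met).map((c) => c.reason));
```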


Responsive design implementation

Since many applicants apply from mobile phones, the prototype is responsive to various screen sizes, from desktop to mobile. I established breakpoints for each and provided the development team with a responsive grid system.
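A minimal sketch of how such a breakpoint spec might be expressed for engineering; the pixel values and column counts are illustrative, not the actual grid system handed off.

```typescript
// Illustrative breakpoint spec. The widths, column counts, and margins
// are hypothetical stand-ins for the real grid system.

const breakpoints = {
  mobile:  { minWidth: 0,    columns: 4,  margin: 16 },
  tablet:  { minWidth: 768,  columns: 8,  margin: 24 },
  desktop: { minWidth: 1200, columns: 12, margin: 32 },
} as const;

function gridFor(viewportWidth: number) {
  // Pick the widest breakpoint the viewport still satisfies.
  if (viewportWidth >= breakpoints.desktop.minWidth) return breakpoints.desktop;
  if (viewportWidth >= breakpoints.tablet.minWidth) return breakpoints.tablet;
  return breakpoints.mobile;
}

console.log(gridFor(390));  // mobile grid: 4 columns
console.log(gridFor(1440)); // desktop grid: 12 columns
```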

Handoff showcase

This is a showcase of how I handed off the design spec to the engineering team.

Learnings and Reflections


Micro-interactions matter in UX design

Even in simple products like an assessment tool, the details matter. Standard formats often overlook specific user needs. Thoughtful design decisions — from navigation placement to required field indicators — enhance user experience. Micro-interactions play a crucial role in communicating system status and guiding users effectively.

Be transparent with your users

Transparency fosters trust and respect, particularly in AI-driven products. Clearly explaining product logic and algorithms is crucial. While business constraints may limit full disclosure, translating complex logic into user-friendly content is essential for clear communication and understanding.

@2024 Jayce Kong — Thanks for visiting :)

kwngzy@gmail.com
