The exams we deliver at Pearson VUE have a direct and positive impact on communities around the globe, driving progress and helping our clients deliver on the promise of their industries. In this series, we’re taking a deeper look at the ways we make that happen, by speaking to people from around our business who are making a lasting impact in a particular area of assessments.
This time we’re speaking to our Principal Psychometrician, Dr. Edward Feng Li, who is based in our Asia-Pacific (APAC) region.
With a PhD in Education from the University of New South Wales, Australia, Dr. Li applies educational psychology and quantitative methods to improve how assessments are designed and analyzed for test owners.
In his own words, "Psychometrics is the science behind assessments," and he’s excited about our company’s adoption of technologies such as AI to enhance customer offerings, automate processes, and drive efficiencies.
Dr. Edward Feng Li
Principal Psychometrician, Australia and Southeast Asia region
Dr. Li, please tell us a bit about yourself. We’re keen to know about your PhD background.
I have a PhD in Education with a focus on psychology and quantitative methods.
I benefited from diverse expertise in my academic faculty, which has provided me with in-depth knowledge of many areas of this technical discipline. I’ve also received training in qualitative research methods which has further broadened my skillset as a researcher. It was fun to apply this knowledge to develop assessment instruments for my PhD research projects.
My academic background has equipped me with a robust analytical framework and a comprehensive understanding of the complexities involved in the science of measurement and its application across both educational and professional settings.
You’ve recently taken up a new role at Pearson VUE of Principal Psychometrician. What does that involve exactly?
As the Principal Psychometrician at Pearson VUE, my role is multifaceted. However, I would say my primary focus is always striving to ensure the validity and reliability of our assessment solutions and how they can be customized to meet a diverse range of client needs. Now wearing the “Principal” hat, I feel a greater sense of responsibility, not just in nurturing other members of my team, but in broadening our focus beyond the operational day-to-day work to more strategic considerations. I’m now more involved in collaboration with other functional groups within Pearson VUE, such as technology and product development, to further enhance the full capabilities of our testing services. Pearson VUE sets itself apart not just by following industry standards but by being a thought leader, pioneering innovative approaches that define best practices in assessment.
What does a typical day look like for you?
Each day is different. One day you might find me conducting data analysis, another day, crafting assessment solutions for a client, or collaborating with other Pearson teams. Over a longer timeframe, it’s a blend of strategic planning, team collaboration, and hands-on analysis. I also like to block out time to catch up with the latest developments in psychometric methodologies and technology — particularly Generative AI.
We imagine you collaborate with a range of different stakeholders both internally and externally when developing assessments. Can you tell us a bit about this process?
Collaboration is at the heart of developing effective assessments. Externally, I engage with clients to understand their unique needs and challenges. This is crucial because “textbook-style” practices are not going to suit every test owner. Since clients often don’t have in-depth psychometric knowledge, I have to find ways to explain key principles in terms a lay person can understand, so they’re not overwhelmed. A strong and strategic plan can only be drafted when everyone is on the same page. Internally, it often involves working closely with the business development team to make an initial assessment of clients’ needs, followed by input from our program management, content development, and test publishing teams to ensure that our assessments are well planned, well developed, and delivered seamlessly to our clients. This collaborative process allows us to tailor our solutions, ensuring they are effective and practical.
To you personally, what is the importance/value of a highly skilled psychometric team?
Psychometrics is the science behind assessments. A highly skilled psychometric team brings a critical eye to test development, ensuring the assessment is fair, valid, and reliable. Our expertise enables us to navigate the complexities of measuring latent constructs accurately across diverse populations. This includes defining the construct that needs to be assessed and ensuring that test items effectively evaluate test-takers' understanding of that construct. In addition to technical skills, at Pearson VUE, we also emphasize the “soft” skills required by the psychometric team. For example, having great communication skills is key to understanding a client’s specific needs and helping them to understand how our technology solutions can improve their assessment. Strong communication skills foster the growth of long-lasting relationships with clients. Moreover, soft skills are also critical in the test development process itself, since preparing an assessment involves a lot of human interaction, such as facilitating standard setting and job task analysis workshops.
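To make the idea of reliability a little more concrete, here is a minimal sketch of one classic internal-consistency index, Cronbach's alpha, computed from item-level scores. This is an illustrative example only, not a Pearson VUE tool; the function name and the toy response data are hypothetical.

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Estimate internal-consistency reliability (Cronbach's alpha).

    responses: 2-D array, rows = test takers, columns = items,
    scored numerically (e.g. 0/1 for dichotomous items).
    """
    n_items = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)      # variance of each item
    total_variance = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 5 test takers answering 4 dichotomous items
scores = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```

Higher values indicate that the items hang together more consistently; in practice, psychometricians interpret such indices alongside validity evidence rather than in isolation.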
What advice do you give to test owners/clients striving to maintain the high standards of their credential?
It’s essential to routinely review and update assessments to reflect the latest changes in both subject matter and psychometric practices. If a client is ever unsure about the appropriate course of action, it’s crucial to first consult with a psychometrician. Sometimes, the effort and resources required to implement a textbook-style assessment may seem overwhelming, but remember: “Don’t let perfect be the enemy of good.” Any small improvement counts!
Southeast Asia is a key market for our business with some exciting new launches coming up. Within this dynamic region, from where do you see most demand for psychometric services?
As a psychometrician based in Melbourne, Australia, I’ve witnessed the fast growth in this region in recent years. More and more organizations require robust assessment methods to ensure the competence of certified candidates and to safeguard the credibility of their certification programs. Pearson VUE’s expertise in computer-based testing and its global reach will continue to appeal to clients in this region as they grow and expand their exam programs.
With so much emphasis on AI at the moment, how do you see measurement methodologies evolving over the next few years?
It’s a tricky question as experts often get predictions about the future wrong. Here’s my two cents. The integration of AI into psychometrics is not new. For example, we use machine learning to detect cheating and to identify enemy items. The current emphasis on AI is largely driven by the development and implementation of Large Language Models (LLMs). LLMs have proven capable of successfully handling a wide range of tasks, from text-based activities like translation, summarization, and text generation, to coding, and even image generation. Recently, Sora (an OpenAI tool) generated high-quality short videos from text prompts.
However, I tend to think of LLMs more as an intermediary between humans and machines, in the sense that they can better understand what humans ask for and then interact with us accordingly. For example, if you want to calculate a number, a calculator is going to be more reliable than an LLM. Consider how we used to interact with computers through disk operating system (DOS) commands — or how the use of graphical user interface (GUI)-based operating systems like Microsoft Windows simplified the way we use computers. Now, with LLMs, we’re moving to an era where we can use natural language to instruct a computer or software to perform complex tasks, making technology more accessible and intuitive. Because of this capability, I’d imagine that simulation-based tests would become a more practical part of our assessment toolkits, opening up new possibilities for measuring skills and abilities in ways that were previously difficult to implement.
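Dr. Li mentions above that machine learning is already used to detect cheating. As a loose, hypothetical illustration of the simplest version of that idea (not Pearson VUE's actual method), the sketch below flags pairs of candidates whose multiple-choice answers agree suspiciously often; the candidate IDs, answer strings, and threshold are made up for the example.

```python
from itertools import combinations

# Hypothetical answer strings for five candidates on a 10-item multiple-choice test.
responses = {
    "cand_01": "ABCDABCDAB",
    "cand_02": "ABCDABCDAB",  # identical to cand_01: worth a closer look
    "cand_03": "BCADDBACCA",
    "cand_04": "ABCDDBCDAB",
    "cand_05": "CADBACDBCA",
}

SIMILARITY_THRESHOLD = 0.9  # flag pairs agreeing on 90%+ of items

def agreement(a: str, b: str) -> float:
    """Proportion of items on which two candidates chose the same option."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

for (id_a, ans_a), (id_b, ans_b) in combinations(responses.items(), 2):
    score = agreement(ans_a, ans_b)
    if score >= SIMILARITY_THRESHOLD:
        print(f"Review pair {id_a} / {id_b}: {score:.0%} agreement")
```

In real test-security work, flags like these are only a starting point; statistical evidence is weighed together with other information before any conclusion about misconduct is reached.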