I did a small experiment in one of my student sessions. “What comes to your mind when I say Apple?” I asked the participants.
Each student gave different answers; some overlapped and some were distinct. Here is a sample:
Student-1: red, round, fruit
Student-2: health, doctor, juicy
Student-3: iPad, iPhone, Mac
Student-4: Newton, tree, gravity
Each student had a different perception. But all of them benefited from the perceptions of others.
This gave us the idea for an open assessment app.
- The assessment app provides a list of topics and subtopics.
- Once the user picks a topic, they are presented with a concept term.
- The participant responds with one or more response terms: anything that comes to mind. This is not a multiple-choice question; it is a way to get a broader picture of what the student knows about the topic.
We collect all the answers and create a tag cloud for each concept term. A learner, after taking a self-assessment, can look at these tag clouds and learn more.
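The aggregation step can be sketched with a simple frequency count. This is only an illustration, not the app's actual implementation; the sample responses and the `build_tag_cloud` name are hypothetical.

```python
from collections import Counter

# Hypothetical sample: free-form response terms gathered for the
# concept term "Apple". Each inner list is one participant's answer.
responses = [
    ["red", "round", "fruit"],
    ["health", "doctor", "juicy"],
    ["iPad", "iPhone", "Mac"],
    ["Newton", "tree", "gravity"],
    ["fruit", "red", "iPhone"],
]

def build_tag_cloud(responses):
    """Aggregate response terms into weights for a tag cloud.

    Terms are lower-cased so "Red" and "red" count together; the
    resulting counts can drive font sizes in a tag-cloud widget.
    """
    return Counter(term.lower() for terms in responses for term in terms)

cloud = build_tag_cloud(responses)
print(cloud.most_common(3))  # the most frequent terms get the largest tags
```

A real tag-cloud widget would map these counts onto font sizes, but the core idea is just this frequency table.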
So far, so good. But how do you scale this to hundreds or thousands of students? And how do you filter out noisy input, such as a student typing “I don’t know”?
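Before reaching for AI, one naive baseline is a rule-based filter that drops obvious non-answers. This is only a hypothetical sketch of the simplest possible approach, not what our app does; the phrase list is illustrative.

```python
# A hypothetical blocklist of obvious non-answers (illustrative only).
NOISE_PHRASES = {"i don't know", "idk", "no idea", "not sure", ""}

def filter_responses(terms):
    """Drop terms that match known non-answer phrases (a naive baseline)."""
    return [t for t in terms if t.strip().lower() not in NOISE_PHRASES]

print(filter_responses(["red", "I don't know", "fruit"]))
```

A blocklist like this misses misspellings and off-topic answers, which is exactly why a smarter filter is needed.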
This is where AI comes in. We will cover our AI experiments in a future post.
Do you want to give it a try and assess your knowledge of Python? Register at https://app.buildskills.in/#/python/beginner