
Knowledge control: robots versus students



Hello, dear %username%. I have decided once again to share my modest experience teaching IT, in particular on the DBMS course at Technopark, a joint project of Mail.Ru Group and Bauman MSTU. This article is devoted to knowledge control: how we have changed, and keep changing, our approach to it, which problems we ran into, and how we plan to solve them. Details under the cut.

University, exams, exam session

Let me start with how things used to be and which problems we wanted to solve:

  1. The main knowledge-control instrument was the exam. It is something of a lottery: you can draw a "lucky" ticket, cheat, or simply guess. A human factor also remains: the teacher's disposition toward you, their mood, and so on. In fact, it is very hard to devise an independent assessment system with a human face. On top of that, the Technopark exam fell during the exam session at Baumanka, and students naturally set their priorities not in favor of the project.
  2. Midterm controls (RK). To improve the situation with a single point of control, we added intermediate checkpoints. We wanted to test specific knowledge without value judgments, offering students tasks where a correct answer earns N points. But, as it turned out, checking the queries written by 30 students for 18 tasks each (540 in total) takes far too much time, and there are three such RKs.
  3. Semester project. During the course, students in groups of three were asked to develop a project of their choice, with the requirement that it be a highly loaded database. And here is the trouble again: it proved very hard to formulate meaningful, adequate requirements for a spherical project in a vacuum. Groups also tend to split the course subjects among their members: you do Java, you do the DBMS, and I do the frontend. As a result, students focused on one subject and did not receive enough knowledge of the others.

Assessing the situation, we decided to call in the robots.

Our own zoo, with chess and poetesses

We decided to automate the acceptance of both the midterm controls and the semester project. For the RKs, we developed an automated knowledge-testing system. Students were asked to answer 10 questions within an hour; each RK allowed 3 attempts, and there were 3 RKs in total:
  1. Writing complex SQL queries
  2. Building and Understanding Indexes
  3. NoSQL (MongoDB)

For topics 1 and 3, a simple and obvious solution was found fairly quickly. Students are given a task for which they must write a query, and the query's result is compared with a reference result supplied by the teacher. There are, of course, some subtleties: the tasks have to be specified in sufficient detail, listing all the attributes expected in the final result set, the sort order, and the limits.
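The article does not show the checker itself; below is a minimal sketch of the idea, assuming an in-memory SQLite database and invented table and task names. The student's query and the teacher's reference query run against the same test data, and the full result sets are compared, row order included (which is why the task must pin down sorting):

```python
import sqlite3

def check_query(student_sql: str, reference_sql: str, setup_sql: str) -> bool:
    """Run both queries against the same throwaway database and
    compare their complete result sets, row order included."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(setup_sql)
    try:
        try:
            student_rows = conn.execute(student_sql).fetchall()
        except sqlite3.Error:
            return False  # a query that fails to run scores zero
        reference_rows = conn.execute(reference_sql).fetchall()
        return student_rows == reference_rows
    finally:
        conn.close()

# Hypothetical task: "list user names ordered by id".
setup = """
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO users VALUES (1, 'anna'), (2, 'boris');
"""
print(check_query("SELECT name FROM users ORDER BY id",
                  "SELECT name FROM users ORDER BY id", setup))  # True
```

A fresh database per check keeps submissions from interfering with one another; comparing full rows is what forces the task text to spell out every attribute and the sort order.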
But with testing indexes we ran into problems. We failed to come up with a good system for free-form answers, so we limited ourselves to multiple-choice tests with a single correct answer.

We also decided to change the semester project. It would now be developed individually, and it would be the same project for everyone. We chose a comments-engine API, with Disqus as the model. The project can be developed in any language; no frontend is required and, in general, nothing superfluous: essentially, a thin interface over the database.
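The article gives no schema for the comments engine; as a rough illustration of what "a thin interface over the database" could mean here, this is a hypothetical minimal design (SQLite, invented table and column names), with threading modelled by a nullable self-reference:

```python
import sqlite3

# Hypothetical storage for a Disqus-like comments engine:
# replies point at their parent via a nullable parent_id.
SCHEMA = """
CREATE TABLE comments (
    id        INTEGER PRIMARY KEY,
    thread    TEXT    NOT NULL,                 -- page the comment belongs to
    parent_id INTEGER REFERENCES comments(id),  -- NULL for top-level comments
    author    TEXT    NOT NULL,
    body      TEXT    NOT NULL,
    created   TEXT    DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX idx_comments_thread ON comments(thread, created);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)

def post_comment(conn, thread, author, body, parent_id=None):
    """One 'API endpoint': insert a comment and return its id."""
    cur = conn.execute(
        "INSERT INTO comments (thread, parent_id, author, body) "
        "VALUES (?, ?, ?, ?)",
        (thread, parent_id, author, body))
    return cur.lastrowid

root = post_comment(conn, "post-1", "anna", "First!")
post_comment(conn, "post-1", "boris", "A reply", parent_id=root)
count = conn.execute("SELECT COUNT(*) FROM comments WHERE thread = ?",
                     ("post-1",)).fetchone()[0]
print(count)  # 2
```

The index on (thread, created) reflects the dominant query of such an engine: fetch one page's comments in chronological order.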

The project was tested in two stages. In the middle of the semester, once all the lectures on SQL basics were finished, the project underwent functional testing: every API endpoint had to work and return correct answers. At the end of the semester, load testing was conducted. Until recently we did not tell students what would be tested or how, offering them a black box instead, along the lines of: "a link to your project has been placed on the Mail.Ru front page; we will imitate that kind of load, but you can try to predict it yourself." That decision was our mistake, but more on mistakes a little later.
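The load-testing harness itself is not described in the article; a minimal sketch of the idea using only the standard library is below. A stub HTTP handler stands in for a student's project, a thread pool fires concurrent requests at it, and throughput is measured in requests per second:

```python
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Stand-in for a student's comments API: one endpoint that always
# answers 200 OK. In the real course an actual project would be hit.
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def hit(_):
    with urllib.request.urlopen(url) as resp:
        return resp.status

N = 200
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:
    statuses = list(pool.map(hit, range(N)))
elapsed = time.perf_counter() - start
server.shutdown()

print(all(s == 200 for s in statuses))  # True
rps = N / elapsed  # measured throughput
```

A real harness would also vary the read/write mix and ramp the concurrency, but the core measurement, correct responses per unit of time, is the same.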

Here it is worth describing how grading worked:
RK: 10 tasks, each correct answer gives 2 points, so a maximum of 20 points per midterm control and 60 for all three.
Functional testing: 20 points, but each week of late submission deducted 2 points.
Load testing: from the results across all projects we built a distribution, and 0-10 points were awarded according to it.
Another 10 points could be earned in a face-to-face conversation with the teacher, in which we tried to find out who did the project themselves and who copied it.

Less than 75 points - unsatisfactory.
75-84 - satisfactory.
85-94 - good.
95 and above - excellent.
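The point totals and the scale above fit together as follows (a trivial sketch; the maximum is 60 for the RKs, 20 for functional testing, 10 for load testing, and 10 for the conversation, 100 in total):

```python
def mark(points: int) -> str:
    """Map total course points (0-100) to a mark on the scale above."""
    if points >= 95:
        return "excellent"
    if points >= 85:
        return "good"
    if points >= 75:
        return "satisfactory"
    return "unsatisfactory"

print(mark(60 + 20 + 10 + 10))  # perfect score -> excellent
print(mark(74))                 # one point short -> unsatisfactory
```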

Many will say the bar is set high, but read on: there will be a new scale. ;)

"What about just talking?" and other problems of education


The main problem we ran into was cheating. For the RKs we had to make sure that at the end students see only the number of correct answers, not which tasks they got wrong. A small aside: at the end of each semester we meet with the project managers and discuss the semester's results. This year we added students to those meetings. Yes, we discuss the quality of the course with the students themselves (would you like your university teachers to hear what you think of their courses?), and this gave us very important and interesting feedback.

I think you have already guessed what is wrong with our fight against cheating. In the pursuit of knowledge control, we lost an element of learning. Our task is not to sort people into groups by residual knowledge; our task is to teach. And an RK where students can practice and see their mistakes can impart more skills than a couple of hours of lectures.

This leads to the "what about just talking?" problem, which most students complained about. The new format lacked personal communication with the teacher and working through one's mistakes together. Not all students are bold enough to come up after a lecture and ask their questions.

The black-box problem. I mentioned it above. We load-tested the projects at the end of the semester, and students learned the results only after the fact, with no chance to fix anything (and again: after the fact this teaches nothing, and we forgot to help and to talk).

What comes next? How will the heart find peace?

In the new semester we will try to add a little soul to our robots. The midterm controls will no longer limit attempts: you can take them as many times as you like, and the system will show your errors and the correct solutions, helping you learn. At the same time, the maximum you can earn for the online part is 5 points. Another 15 are earned in face-to-face communication with the teacher, where students solve one of the online tasks in the classroom. The teacher's task is to see how well the knowledge has been absorbed and to help students find their weak points and areas for growth.

There will be no late penalties for the semester project (the word is ugly, and they don't work anyway). Functional testing gives 10 points. The load-testing system is provided in advance, and the project has specific requirements: N rps - 10 points, M rps - 8 points, and so on. And another 20 points for the in-person defense.
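The key difference from the old distribution-based grading is that the scale is now published in advance. The article names only abstract thresholds N and M, so the numbers below are invented placeholders purely for illustration:

```python
# The article only says "N rps - 10 points, M rps - 8 points, and so on";
# these (threshold_rps, points) pairs are invented placeholders, best first.
LOAD_SCALE = [(1000, 10), (500, 8), (250, 6), (100, 4)]

def load_points(measured_rps: float) -> int:
    """Return points for a measured throughput against a published scale."""
    for threshold, points in LOAD_SCALE:
        if measured_rps >= threshold:
            return points
    return 0

print(load_points(700))  # 8 under the placeholder scale
```

With a fixed scale, a student can run the provided load-testing system themselves and know their load score before the deadline, instead of discovering it after the fact.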


The total maximum is 100 again, but:
Less than 70 points - unsatisfactory.
70-84 - satisfactory.
85-99 - good.
100 - excellent.

An excellent mark requires a full 100.

I have been teaching at Technopark for two years now. All this time we have been constantly trying to improve the course, to make it more interesting and useful. We meet with students, with the teachers of our stream, and with the teachers of all courses together; we share observations and experience, draw conclusions, and change the program and our approaches to teaching. I think that's great, and I am sure we will keep doing it. ;)

Source: https://habr.com/ru/post/228995/

