Three Years of Using Robots in the Artificial Intelligence Course – Lessons Learned
Amruth N. Kumar
Ramapo College of New Jersey
505 Ramapo Valley Road
Mahwah, NJ 07430
amruth@ramapo.edu
Abstract
We have been using robots in our Artificial Intelligence course since fall 2000. We have been using
the robots for open-laboratory projects. The projects are designed to emphasize high-level
knowledge-based AI algorithms. After three offerings of the course, we paused to analyze the
collected data and see if we could answer the following questions: (i) Are robot projects effective at
helping students learn AI concepts? (ii) What advantages, if any, can be attributed to using robots
for AI projects? (iii) What are the downsides of using robots for traditional projects in AI? In this
paper, we will discuss the results of our evaluation and list the lessons learned.
1. Introduction
Working with robots is exciting. Many teachers and researchers have attempted to translate this
excitement into learning. In the last few years alone, numerous faculty have attempted to
incorporate robots into the undergraduate curriculum, and in various capacities: for non-majors, in a
survey course, across the Computer Science curriculum, for recruitment of women (e.g., [Haller and
Fossum 2001]), in Computer Science I (e.g., [Fagin 2000]) and in the Artificial Intelligence course
(e.g., [Kumar and Meeden 1998; Harlan et al 2001; Klassner 2002]).
We have been using robots in our Artificial Intelligence course [Kumar 2001] for assigning projects
in the course. Our Artificial Intelligence course is a junior/senior level course, taken by Computer
Science majors in a liberal arts undergraduate school. The course is traditional in its content: it
covers representation and reasoning, with emphasis on search, logic and expert systems. Our
objective in using robots was to reinforce the learning of these AI tools using an embodied agent.
This is similar to the approach initially used by [Harlan et al 2001], and that by [Greenwald and
Artz 2004] for soft computing topics. Other objectives for which robots have been used include: as
an organizing theme for the various AI concepts [Kumar and Meeden 1998], as an empirical test-
bed for philosophical issues in a graduate course [Turner et al, 1996], as a bridge between abstract
AI theory and implementation [Shamma and Turner 1998], and as an exploration of the
relationships between hardware, environment and software in agent design [Klassner 2002].
We chose the LEGO Mindstorms (http://www.legomindstorms.com) robot for the course because it
is truly "plug-and-play". Students do not have to design circuits, or even solder components to build
LEGO robots. Therefore, the projects can focus on AI algorithms rather than robot construction.
The LEGO Mindstorms robot is also inexpensive, so we could ask students to buy their own kits,
either individually or in groups of 2 or 3 students.
In order to help students construct their robots, we recommended [Baum, 2002] as a reference. This
book describes several robots, including how to construct and program them. Students can easily
adapt these robots for their projects, and focus on building intelligent behaviors into them using AI
algorithms. Finally, we used Java and LeJos (http://lejos.sourceforge.net) with the robot. This
enabled us to utilize a larger part of the on-board main memory for programs, allowing our students
to write larger programs.
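As an illustrative sketch (not the actual course code), the structure below shows one way such a project can be organized in Java: the AI algorithm talks to the robot only through a small robot-control interface, and all leJOS-specific motor and sensor calls stay behind that interface. The interface and its method names are hypothetical.

    // Illustrative sketch only: a hypothetical interface that hides leJOS-specific
    // motor and sensor calls from the AI layer. Method names are assumptions.
    interface RobotDriver {
        void forwardOneCell();        // low-level, reactive: line following, turning
        void turnLeft();
        void turnRight();
        boolean wallAhead();          // touch or light sensor reading, abstracted away
        void announce(String state);  // report the node/cell just reached (beep, display)
    }

    // The AI algorithm (search, chaining, game playing) lives in classes like this
    // one and never touches motors or sensors directly, which keeps the project
    // focused on the algorithm rather than on robot control.
    class SearchAgent {
        private final RobotDriver driver;

        SearchAgent(RobotDriver driver) {
            this.driver = driver;
        }

        void run() {
            // knowledge-based algorithm goes here, calling driver.forwardOneCell(), etc.
        }
    }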
2. Our Goals and Objectives
The following principles guided how we used robots in our AI course:
• AI, not Electrical Engineering: We wanted to emphasize AI algorithms and not robot
construction. In other words, we wanted to minimize the time our students spent constructing
the robots, and maximize the time they spent implementing AI algorithms to test high-level
behavior of the robots. Constructing robots from components has greater pedagogical value to
engineering students than to Computer Science students. Some knowledge of engineering
principles is desirable for constructing robots. Constructing robots can be time-consuming, and
frustrating to the uninitiated. Since our students were Computer Science majors in a liberal arts
college, we decided to simplify robot construction by using a "plug-and-play" robot such as
LEGO MindStorms.
• AI, not Robotics: We wanted to use robot projects to teach AI, not robotics. We wanted to
emphasize knowledge-based AI algorithms and not reactive algorithms specific to robotics
[Brooks 1986] - we wanted to minimize the time students spent implementing low-level reactive
behavior in the robot, and maximize the time they spent building high-level knowledge-based
behavior traditionally considered “intelligent” in AI. For this reason, we stayed away from the
many interesting problems specific to robotics, such as localization, mapping, odometry,
landmarking, object detection, etc., that have been addressed by other practitioners, e.g., [Dodds
et al 2004; Mayer et al, 2004].
• Open, not Closed Labs: Finally, we wanted to use the robots in open laboratory projects [Tucker
et al 1991] – projects that students carry out on their own, after class. Open labs have some
advantages over closed labs: students can spend as much time as they need to finish a project.
Not only is this often necessary to properly design and test robot behavior, it also encourages
students to be more creative in their design and implementation of robots. Traditionally, closed
lab courses are worth more credits than open-lab courses. So, using open labs in a course helped
us keep down the number of credits in the curriculum.
• Clearly defined, not open-ended: We chose to assign closely tailored (as opposed to open-
ended) projects in our course. Open-ended robot projects, which let students fully exercise their
imagination, are certainly appealing, and they can be combined with contests to foster a healthy
atmosphere of learning while playing (e.g., [Sklar et al 2002; Verner and Ahlgren 2004]). But many
robot behaviors can be implemented just as well with purely reactive algorithms as with
knowledge-based algorithms. Since our emphasis was on using AI algorithms, we chose to closely
specify the requirements of our projects. This not only gave students a clear idea of what was
expected of them, but also helped us formulate a clear grading policy.
Our approach differs from other current approaches for using robots in the AI course along the lines
of these objectives.
2.1 Robots for Traditional Projects
Why use robots for traditional knowledge-based AI projects? Doing so has many pedagogical benefits:
• Using robots promotes hands-on, active learning, which increases the depth of the student’s
knowledge [McConnell 1996];
• Students can use robots to test their implementation of AI algorithms in a “situated”
environment rather than an abstract symbolic environment, which is known to promote
learning [Brown et al, 1989];
• Students have tangible ways to assess the success or failure of their implementation. Since
robots are tactile, using them helps visual and kinesthetic learners, thereby addressing the
needs of more and different types of learners than a traditional AI course.
Finally, robots excite and motivate students.
We believe that using robots in the Artificial Intelligence course also helps students better
understand topics such as:
• Algorithm Analysis: Students get a better feel for the time and space complexities of
algorithms, and an appreciation of algorithm complexity issues.
• Real-time programming: A topic not well represented in typical undergraduate liberal arts
Computer Science curricula is real-time programming (non-determinism, concurrency, promptly
responding to changes in the environment). Using robots addresses this issue, briefly, but
effectively.
• Group Projects: [Maxwell and Meeden, 2000] state that what students learned the most from
their robotics course was about teamwork. Assigning robot projects as group projects is a great
way to offer additional opportunities for collaborative learning in the curriculum.
Some questions worth considering in this context are: (i) Are robot projects effective at helping
students learn traditional AI concepts? (ii) Are there downsides to using robots for traditional
projects in AI? (iii) What other advantages, if any, can be attributed to using robots for AI projects?
We will attempt to answer these questions.
3. Typical Projects
The topics that we identified as candidates for robot projects in the introductory Artificial
Intelligence course are:
• Blind searches – depth-first search and breadth-first search;
• Informed searches – hill climbing, best-first search and A* search;
• Expert systems - forward chaining and backward chaining;
• Game playing - Minimax search and alpha-beta cutoffs.
For each topic, we designed a project that engaged the robot in a meaningful task from the real
world. Clearly, all these projects could be implemented just as easily as purely symbolic solutions.
But robot-based solutions are more natural and intuitively appealing than symbolic solutions to
these problems.
Blind searches: The robots had to use blind search algorithms to traverse either a two-dimensional
tree (See Figure 1) or a maze. Although these problems can also be solved purely reactively, we
required that students use knowledge-based algorithms in their robots.
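As a minimal sketch of the knowledge-based core of such a project (plain Java with illustrative names, not the actual project code), depth-first traversal over a known adjacency list can be driven by an explicit stack; driveTo() stands in for the robot-specific motor control, and physical backtracking between branches is glossed over.

    import java.util.ArrayDeque;
    import java.util.Collections;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Illustrative depth-first search sketch; driveTo() is a hypothetical stand-in
    // for the leJOS-level motor control.
    class DepthFirstSketch {
        private final Map<Integer, List<Integer>> children;  // node -> child nodes, known in advance

        DepthFirstSketch(Map<Integer, List<Integer>> children) {
            this.children = children;
        }

        void search(int root, int goal) {
            Deque<Integer> stack = new ArrayDeque<>();
            Set<Integer> visited = new HashSet<>();
            stack.push(root);
            while (!stack.isEmpty()) {
                int node = stack.pop();
                if (!visited.add(node)) continue;   // skip nodes already expanded
                driveTo(node);                      // physically move the robot to this node
                if (node == goal) return;           // announce success and stop
                List<Integer> kids = children.getOrDefault(node, Collections.emptyList());
                for (int i = kids.size() - 1; i >= 0; i--) {
                    stack.push(kids.get(i));        // push right-to-left so the leftmost child is expanded first
                }
            }
        }

        private void driveTo(int node) {
            // In the actual projects this would issue leJOS motor/sensor commands,
            // including backtracking along tree edges; omitted here.
        }
    }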
Informed searches: The robots had to use informed search algorithms either to traverse a maze
whose layout is known (See Figure 2), or to clean a grid of rooms while minimizing travel. A
successful robot not only reaches the end of the maze (which can be accomplished through reactive
behavior alone), but also explores the maze in the order dictated by the algorithm, forgoing parts of
the maze that are sub-optimal.
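A minimal sketch of the informed-search core appears below (plain Java with illustrative names, not the course's code): the planner returns the sequence of maze cells the robot should drive, and the leJOS-level driving is omitted. Greedy best-first search would order the frontier by the heuristic alone; A* orders it by cost-so-far plus heuristic, as shown here.

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import java.util.PriorityQueue;

    // Illustrative A* sketch over a known maze: cells are numbered 0..cellCount-1,
    // neighbors lists the cells reachable in one move, and heuristic[] estimates the
    // remaining distance to the exit. PriorityQueue is used for clarity; the memory-
    // limited RCX would need a simpler hand-rolled frontier.
    class AStarSketch {
        static int[] plan(int start, int goal, Map<Integer, List<Integer>> neighbors,
                          int[] heuristic, int cellCount) {
            int[] g = new int[cellCount];            // cheapest known cost from the start cell
            int[] parent = new int[cellCount];
            Arrays.fill(g, Integer.MAX_VALUE);
            Arrays.fill(parent, -1);
            g[start] = 0;

            PriorityQueue<Integer> open = new PriorityQueue<>(
                (a, b) -> Integer.compare(g[a] + heuristic[a], g[b] + heuristic[b]));
            open.add(start);

            while (!open.isEmpty()) {
                int current = open.poll();
                if (current == goal) break;          // cheapest path to the exit found
                for (int next : neighbors.get(current)) {
                    int tentative = g[current] + 1;  // unit cost per move between cells
                    if (tentative < g[next]) {
                        g[next] = tentative;
                        parent[next] = current;
                        open.remove(next);           // refresh its priority if already queued
                        open.add(next);
                    }
                }
            }

            // Reconstruct the cell sequence the robot should drive, start to goal.
            int length = 0;
            for (int n = goal; n != -1; n = parent[n]) length++;
            int[] path = new int[length];
            int i = length - 1;
            for (int n = goal; n != -1; n = parent[n]) path[i--] = n;
            return path;
        }
    }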
Expert Systems: The robots had to use forward and backward chaining to scan a pixel grid and
determine the character represented by the grid. We have used two versions of the pixel grid – one
where the robot traverses the grid (See Figure 3) and another where it scans the grid with an
overhanging scanner arm (See Figure 4). A successful robot not only correctly identifies the
character, but also traverses the pixel grid in the order dictated by the algorithm.
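A minimal forward-chaining sketch is given below (plain Java; the fact and rule encodings are assumptions for illustration, not the rule base used in the course): pixel readings asserted during the robot's scan become facts, and rules fire until no new fact can be derived. Backward chaining would instead start from a candidate character and scan only the pixels its rules need, which is why the two algorithms traverse the grid in different orders.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Illustrative forward-chaining sketch: facts are simple strings such as
    // "pixel(1,2)=dark", asserted as the robot scans the grid; rules conclude new
    // facts such as "character=T".
    class ForwardChainingSketch {
        static class Rule {
            final Set<String> premises;   // all must hold for the rule to fire
            final String conclusion;      // fact added when the rule fires
            Rule(Set<String> premises, String conclusion) {
                this.premises = premises;
                this.conclusion = conclusion;
            }
        }

        // Fires rules repeatedly until no new fact can be derived.
        static Set<String> chain(Set<String> facts, List<Rule> rules) {
            Set<String> known = new HashSet<>(facts);
            boolean changed = true;
            while (changed) {
                changed = false;
                for (Rule r : rules) {
                    if (!known.contains(r.conclusion) && known.containsAll(r.premises)) {
                        known.add(r.conclusion);
                        changed = true;
                    }
                }
            }
            return known;   // inspect for a fact of the form "character=..."
        }
    }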
Game Playing: The robot had to play “mini-chess” with a human opponent. The chess board was 5
x 5, contained only pawns, and the robot learned of its opponent’s moves through keyboard input.
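A minimal minimax-with-alpha-beta sketch follows (written in modern Java for clarity; the GameState abstraction and its method names are assumptions for illustration, and the 5 x 5 pawn board, move generation, and evaluation function are omitted). The RCX JVM of that era would need a simpler, non-generic version.

    import java.util.List;

    // Illustrative alpha-beta sketch; GameState is a hypothetical abstraction of the
    // 5 x 5 pawn board. Board representation and evaluation are not shown.
    interface GameState {
        boolean isTerminal();           // win, loss, or no legal moves
        int evaluate();                 // static value from the robot's point of view
        List<GameState> successors();   // positions reachable in one move
    }

    class AlphaBetaSketch {
        // Returns the minimax value of 'state' searched to 'depth' plies, pruning
        // branches that cannot change the result (alpha-beta cutoffs).
        static int search(GameState state, int depth, int alpha, int beta, boolean maximizing) {
            if (depth == 0 || state.isTerminal()) {
                return state.evaluate();
            }
            if (maximizing) {
                int best = Integer.MIN_VALUE;
                for (GameState next : state.successors()) {
                    best = Math.max(best, search(next, depth - 1, alpha, beta, false));
                    alpha = Math.max(alpha, best);
                    if (alpha >= beta) break;   // beta cutoff
                }
                return best;
            } else {
                int best = Integer.MAX_VALUE;
                for (GameState next : state.successors()) {
                    best = Math.min(best, search(next, depth - 1, alpha, beta, true));
                    beta = Math.min(beta, best);
                    if (alpha >= beta) break;   // alpha cutoff
                }
                return best;
            }
        }
    }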
In each case, the students could practice on one version of the prop, but had to demonstrate their
robot on a different version that was not available to them until the day of submission. Human
assists were discouraged, but only infrequently penalized. The size of the props (e.g., the number of
rooms in a grid or maze, the number of nodes in a tree, and the number of characters displayable on
a pixel grid) was limited by the robot's main memory. Students could work either individually or in
teams of two.
Figure 1: A Two-Dimensional Tree for Blind Searches
Figure 2: The Maze – for Blind and Informed Searches
Figure 3: Pixel Grid for Forward and Backward Chaining Expert Systems
Figure 4: Pixel Grid and Scanner Arm of a Robot
4. Evaluation of Robot Projects
We have offered our AI course with robot projects three times so far. In this section, we will discuss
the results of evaluating the robot projects each semester and compare them to draw conclusions
that span all three years.
4.1 Fall 2000
This was the first time we offered the AI course with robot projects. We conducted an anonymous
survey of our students at the end of the semester. In the survey, we asked them to compare robot
projects with traditional LISP projects, or projects in other courses they had taken with the same
instructor (all but 3 of the respondents had taken other courses with the same instructor). There
were 16 students in the class, all of whom responded to the survey.
Compared to projects in other courses, students rated the robot projects in AI as:
• hard, i.e., 3.85 on a Likert scale of 1 (very easy) to 5 (very hard).
• taking a lot more time, i.e., 4.56 on a scale of 1 (lot less time) to 5 (lot more time)
• more interesting, i.e., 4.18 on a scale of 1 (lot less interesting) to 5 (lot more interesting)
So, our initial hunch in using robots in the course was right: even though students spent a lot more
time doing robot projects, they also enjoyed the projects a lot more. Students agreed that the robot
projects:
• helped them learn/understand AI concepts better (2.06 on a scale of 1 (Strongly Agree) to 5
(Strongly Disagree))
• gave them an opportunity to apply/implement AI concepts that they had learned (1.93 on the
above scale)
They rated the various components of the projects on a scale of 1 (easy) to 3 (hard) as follows:
• the assigned problems tended to be hard: 2.4.
• putting together robot hardware was easy to moderate: 1.71
• writing software for the robot was moderate: 2.00
• getting the robot to work reliably was hard: 2.87
Clearly, building the robot was the easiest part of the project, which validated our choice of LEGO
robots for the course.
Students were nearly neutral on whether the grade they received on the projects accurately reflected
how much they had learned from the projects (2.61 on a Likert scale of 1 (Strongly Agree) to 5
(Strongly Disagree)) and on whether their grades were an accurate reflection of how much time
they had spent doing the projects (3.30 on the same scale).
Students were unanimous in recommending that we continue to assign robot projects in future
offerings of the course. Over 90% said that they would recommend such a course to friends. Both of
these indicated that robot projects had captured the imagination of the students, and were therefore
effective.
4.2 Fall 2001
We used similar projects in fall 2001 as in fall 2000. We evaluated the projects using an anonymous
survey at the end of the semester. 9 out of the 11 students in the class responded.
Compared to projects in other courses, students rated the robot projects in AI as:
• hard, i.e., 4.33 on a Likert scale of 1 (very easy) to 5 (very hard).
• taking a lot more time, i.e., 4.78 on a scale of 1 (lot less time) to 5 (lot more time)
• more interesting, i.e., 3.44 on a scale of 1 (lot less interesting) to 5 (lot more interesting)
They rated the various components of the projects on a scale of 1 (very easy) to 5 (very hard) as
follows:
• neutral about the assigned problems: 3.22
• putting together robot hardware was easy: 2.44
• writing software for the robot was between neutral and hard: 3.56
• getting the robot to work reliably was very hard: 4.78
Students were neutral on whether the grade they received on the projects accurately reflected how
much effort they had put into the projects (3.43 on a Likert scale of 1 (Strongly Agree) to 5
(Strongly Disagree)). They were neutral on whether their grades were an accurate reflection of how
much they had learned from the projects (3.25 on the same scale).
These figures are interesting in how consistent they are with the figures from fall 2000, as shown in
Table 1. In order to facilitate comparison, we have scaled the Section B scores of fall 2000 from a
three-point scale to a five-point scale. N refers to the number of students who evaluated the projects.
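(For instance, the fall 2000 rating of 2.40 for the assigned problems appears in Table 1 as 2.40 × 5/3 = 4.00, consistent with multiplying the three-point ratings by 5/3.)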
Table 1: Comparison of evaluation results from fall 2000 and fall 2001

Criterion                                                   Fall 2000   Fall 2001
                                                            (N = 16)    (N = 9)

A. Robot projects compared to traditional projects
   The ease of robot projects                               3.85        4.33
      Scale: 1 (very easy) to 5 (very hard)
   Time taken by robot projects                             4.56        4.78
      Scale: 1 (lot less time) to 5 (lot more time)
   How interesting robot projects were                      4.18        3.44
      Scale: 1 (lot less) to 5 (lot more)

B. Student rating of the components of the robot projects
      Scale: 1 (very easy) to 5 (very hard)
   The assigned problems                                    4.00        3.22
   Assembling robot hardware                                2.85        2.44
   Writing software for the robot                           3.33        3.56
   Getting the robot to work reliably                       4.78        4.78

C. Whether project grades reflected
      Scale: 1 (Strongly Agree) to 5 (Strongly Disagree)
   The effort put in by students on the project             3.30        3.43
   How much students had learned from doing the project     2.61        3.25
Even though students consistently rated robot projects as being harder and a lot more time-
consuming than traditional projects, they also rated robot projects as being a lot more interesting
than traditional projects, highlighting one advantage of using robots for traditional projects in AI –
that they are more engaging, and therefore, more effective. Students were consistent in thinking that
getting the robots to work reliably was the hardest part of a robot project. They were also
consistently neutral about whether project grades reflected the effort they put into the projects. That
they reported spending a lot more time on robot projects may have tempered their opinion on this
issue.
4.3 Fall 2003
This was the third time we offered the Artificial Intelligence course with robot projects. Instead of a
single end-of-the-semester evaluation of the use of robots in the course, we evaluated every project
individually. We believed that evaluating each project individually, as soon as it was completed,
would provide a more accurate picture than an end-of-semester evaluation of all the projects
together. Previous evaluations had shown that getting the robot to work reliably was the hardest
part of any robot project. Therefore, in fall 2003, we relaxed the project requirements in several
ways: (i) the environment/props were more flexible; (ii) students could assist their robot without
being penalized, as long as the robot announced the correct state. The state announcements of a
robot were now used to judge its success/failure. (These changes are described in the next section,
"Lessons Learned".)
We wanted to assess the impact of using robots on students' knowledge of analysis of algorithms.
So, we drafted a test consisting of 10 multiple-choice questions. We administered the test both at the
beginning of the semester and again at the end of the semester. Students did not have access to the
test in the interim. The scores of 5 students increased from pre-test to post-test; the scores of 3
students stayed the same, and the scores of 2 students decreased. We discarded the scores of
students who took one test and not the other. We cannot draw any definitive conclusions because of
the small sample size, but the numbers are encouraging.
On our evaluation of the first project, 11 students responded (class size was 12). Respondents rated
the various aspects of the project as follows, on a scale of 1 (very easy) to 5 (very hard):
• Neutral about building the robot: 2.82
• Neutral about writing the program: 2.9
• Getting the robot to work reliably was hard: 4.18
Students rated the components of the projects as follows on a scale of 1 (Strongly agree) to 5
(Strongly disagree):
• For Depth-first search:
o helped them understand the algorithm: 2.55
o helped them learn how to implement the algorithm: 2.45
o helped them apply the algorithm to a problem: 2.0
• For Hill-Climbing:
o helped them understand the algorithm: 2.18
o helped them learn how to implement the algorithm: 2.55
o helped them apply the algorithm to a problem: 2.27
On a concept quiz, students who did not attempt the project did poorly as compared to those who
did.
On our evaluation of the second project, 12 students responded. Respondents rated the various
aspects of the project as follows on a scale of 1 (very easy) to 5 (very hard):
• Building the robot was easy: 2.08
• Neutral about writing the program: 2.92
• Getting the robot to work reliably was neutral to hard: 3.58
Students rated the components of the project as follows on a scale of 1 (Strongly agree) to 5
(Strongly disagree):
• For Best-first search:
o helped them understand the algorithm: 2.17
o helped them understand how to implement the algorithm: 2.08
o helped them apply the algorithm to a problem: 2.08
• For A* search:
o helped them understand the algorithm: 2.25
o helped them understand how to implement the algorithm: 2.25
o helped them apply the algorithm to a problem: 2.25
On a concept quiz, students who did not attempt the project did poorly as compared to those who
did.
On our evaluation of the third project, 10 students responded. Respondents rated the various aspects
of the project as follows on a scale of 1 (very easy) to 5 (very hard):
• Building the robot: 3.3. This is to be expected since students had to build a scanner arm,
which had an overhang and used rack and pinion gears (See Figure 4).
• Writing the program: 2.9
• Getting the robot to work reliably: 4.0
Students rated the components of the project as follows on a scale of 1 (Strongly agree) to 5
(Strongly disagree):
• For forward chaining:
o helped them understand the algorithm: 2.2
o helped them understand how to implement the algorithm: 2.2
o helped them apply the algorithm to a problem: 2.2
• For backward chaining:
o helped them understand the algorithm: 2.1
o helped them understand how to implement the algorithm: 2.1
o helped them apply the algorithm to a problem: 2.2
On the fourth project, since there were only 3 respondents, we did not analyze the results. Table 2
summarizes the student responses for the first three projects. N refers to the number of students who
evaluated the projects.
Table 2: Comparison of evaluation results from fall 2003

Criterion                                            Project 1   Project 2   Project 3
                                                     (N = 11)    (N = 12)    (N = 10)

Rating the components of the project
      Scale: 1 (very easy) to 5 (very hard)
   Building the robot                                2.82        2.08        3.30
   Writing the program                               2.90        2.92        2.90
   Getting the robot to work reliably                4.18        3.58        4.00

For the first algorithm, the project helped
      Scale: 1 (Strongly agree) to 5 (Strongly disagree)
   Understand the algorithm                          2.55        2.17        2.20
   Understand how to implement the algorithm         2.45        2.08        2.20
   Apply the algorithm to a problem                  2.00        2.08        2.20

For the second algorithm, the project helped
      Scale: 1 (Strongly agree) to 5 (Strongly disagree)
   Understand the algorithm                          2.18        2.25        2.10
   Understand how to implement the algorithm         2.55        2.25        2.10
   Apply the algorithm to a problem                  2.27        2.25        2.20
Students were again consistent in noting that getting the robot to work reliably was the hardest part
of a robot project. For all six algorithms, students agreed that the robot project helped them
understand the algorithm, how to implement it, and how to apply it to a problem. Clearly, students
believe that robot projects help them learn the underlying AI concepts. The results were most
consistent, and hence most definitive, in showing that robot projects help students learn how to
apply an algorithm to a problem. We believe that this is one of the advantages of using robots for traditional
projects in AI – in the traditional symbolic world, concepts such as state and operators are exact,
whereas in the robot world, they are subject to approximation and interpretation. This forces
students to focus on the boundary between real-life problems and their AI solutions, and helps them
better develop their skills of operationalizing AI algorithms, i.e., applying them to solve problems.
We are pleased that this is borne out by the results of the evaluation.
We analyzed the results of the midterm and final exam to see if there was a correlation between
project completion and grades. We considered anyone who scored at least 60% on a project as
having completed the project. Table 3 summarizes the results. For instance, the first project was on
depth-first search and hill climbing. On the midterm exam, 40% of the first question was on depth-
first search. The sole student who did not attempt the first project scored 3 (out of 10) points on this
question. The scores of the rest of the students ranged from 7 through 10, with an average of 8.4.
N/A in the table indicates that the student did not attempt the question - students were asked to
answer 6 out of 7 questions on the midterm exam and the final exam. It is clear from the table that
there is a positive correlation between project completion and student scores on relevant sections of
the tests.
Table 3: Comparison of the test scores of the students who completed the projects versus those who did not.

Problem                  Topic & Points                    Scores (out of 10) of students who...
                                                           attempted the project          did not attempt the project

Project 1: Depth-first search and Hill Climbing
   Midterm Problem 1     4 points on depth-first search    Range: 7 to 10 (11 scores)     3
                                                           Average: 8.4
   Midterm Problem 3     6 points on hill climbing         Range: 3 to 10 (11 scores)     N/A
                                                           Average: 7.5

Project 2: Best-first search and A*
   Midterm Problem 5     8 points on A* search             Range: 6 to 9 (9 scores)       4, 5, 9
                                                           Average: 7.7

Project 3: Forward and backward chaining
   Final Exam Problem 5  6 points on forward/              Range: N/A to 10 (10 scores)   4, N/A
                         backward chaining                 Average: 5.7
4.4 Analysis Across the Semesters
We considered any student who scored at least 1 point on a project as having attempted the project,
and anyone who scored at least 60% of the points as having completed the project. Table 3 lists the