As a logician, I am interested in the kinds of logic that can
be used to solve these puzzles. We are trying to isolate inference
rules that apply to the individual puzzles within this domain,
but we are mostly interested in seeing whether any logics apply
across the different puzzles of this type.
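To make "inference rule" concrete, here is a minimal sketch of one such rule for Sudoku, a representative grid logic puzzle: the "naked single" rule, which places a digit in a cell once that cell has exactly one remaining candidate. The function names are illustrative only and are not taken from any LEGUP code.

```python
def peers(r, c):
    """Cells sharing a row, column, or 3x3 box with (r, c)."""
    box_r, box_c = 3 * (r // 3), 3 * (c // 3)
    cells = {(r, j) for j in range(9)} | {(i, c) for i in range(9)}
    cells |= {(box_r + i, box_c + j) for i in range(3) for j in range(3)}
    cells.discard((r, c))
    return cells

def naked_singles(grid):
    """Yield (row, col, digit) moves forced by the naked-single rule.

    grid is a 9x9 list of lists; 0 marks an empty cell.
    """
    for r in range(9):
        for c in range(9):
            if grid[r][c] != 0:
                continue
            # A candidate is any digit not already used by a peer.
            candidates = set(range(1, 10)) - {grid[i][j] for i, j in peers(r, c)}
            if len(candidates) == 1:
                yield r, c, candidates.pop()
```

An inference rule in this sense is a function from a puzzle state to a set of justified moves; the research question is which such rules recur across different grid puzzles.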
As an educator, I have started the LEGUP
project. The goal is to build an interface in which users
solve grid logic puzzles by applying inference rules such as
those identified above. The hope is to eventually
use this interface in my Introduction
to Logic course as a more engaging environment to learn about
logic than merely manipulating P's and Q's. We are also building
a tutor into the interface to help teach the students, and I
want to explore different tutoring strategies and their effect
on student learning.
As a cognitive scientist, I am interested in knowing how people
solve these kinds of puzzles. This question has several different
sub-questions: Are humans using the same logic rules that I
identified as a logician above? Which interface for the LEGUP
project will be most helpful to its users? How does the use
of an interface affect the task of solving the puzzle? Or,
in general, what role does the environment play in this task (think
of Sudoku solvers who often use annotations to help their reasoning)?
And finally, how do people come up with these strategies?
As an Artificial Intelligence engineer, I am interested in building
different AIs to solve these puzzles. Some of these AIs could
be integrated into the LEGUP interface:
For example, the user could guess at a possible move and leave
it to the AI to verify that the move is indeed correct. A 'straightforward'
special-purpose AI solver (of which many already exist)
would suffice for that purpose. However, I am especially
interested in AIs that are able to transfer their skills from
one puzzle to the next, and hopefully even to a newly defined
grid logic puzzle. The cognitive science research above could
certainly be used for the development of such an AI.
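One way such a verifier could work, sketched again for Sudoku: solve the puzzle with a plain backtracking search and accept the user's guessed move only if it agrees with the solution. This is a hypothetical sketch of the 'straightforward' special-purpose solver mentioned above, not LEGUP's actual implementation.

```python
def valid(grid, r, c, d):
    """True if digit d can legally be placed at (r, c)."""
    if d in grid[r] or d in (grid[i][c] for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))

def solve(grid):
    """Fill grid in place by backtracking; return True on success."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):
                    if valid(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and try the next digit
                return False
    return True

def move_is_correct(grid, r, c, d):
    """Check a proposed move against a solved copy of the puzzle.

    Assumes the puzzle determines the value of (r, c) uniquely.
    """
    solved = [row[:] for row in grid]
    return solve(solved) and solved[r][c] == d
```

Note that a verifier like this says nothing about *why* a move is correct; an AI that justifies moves by inference rules, and transfers those rules across puzzles, is the more interesting target.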
Finally, as a philosopher I hope to look at all these angles,
and gain some further unstanding about the mind, reasoning, cognition,
etc. For example, this project clearly relates to situated views
on the mind: so where, if anywhere, should we draw the boundary
of the mind? Or, as far as reasoning goes: what convinces us that
some newly conceived inference principle is valid? And how does
the human mind avoid the problems that AI seems to be running
into? And can 'toy domain' problems such as these be used to address
the typical problems that AI runs into (cross-domain transfer
problem, scaling problem, handholding problem, relevance problem,
etc.)?