
Describe the structure of the information-processing system

In the store model of the human information-processing system, information from the environment that we acquire through our senses enters the system through the sensory register.
 The store model: A model of information processing in which information is depicted as moving through a series of processing units — sensory register, short-term memory, long-term memory — in each of which it may be stored, either fleetingly or permanently.
 Sensory register: the mental processing unit that receives information from the environment and stores it momentarily.
 Short-term memory: the mental processing unit in which information may be stored temporarily; the work space of the mind, where a decision must be made to discard information or to transfer it to permanent storage, in long-term memory.
 Long-term memory: the encyclopedic mental processing unit in which information may be stored permanently and from which it may be later retrieved.
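The flow through the three stores can be made concrete with a toy sketch; the capacity limit, method names, and transfer policy below are illustrative assumptions, not part of the model's specification:

```python
class StoreModel:
    """Toy sketch of the store model: sensory register -> STM -> LTM."""

    STM_CAPACITY = 7  # illustrative work-space limit (an assumption)

    def __init__(self):
        self.sensory_register = []   # momentary storage of raw input
        self.short_term_memory = []  # limited-capacity work space
        self.long_term_memory = {}   # permanent, retrievable store

    def sense(self, stimulus):
        """New input enters the sensory register, overwriting the old."""
        self.sensory_register = [stimulus]

    def attend(self):
        """Move attended information from the sensory register into STM."""
        for item in self.sensory_register:
            if len(self.short_term_memory) >= self.STM_CAPACITY:
                self.short_term_memory.pop(0)  # discard the oldest item
            self.short_term_memory.append(item)
        self.sensory_register = []

    def rehearse(self, key, item):
        """The decision to transfer an STM item to permanent storage in LTM."""
        if item in self.short_term_memory:
            self.long_term_memory[key] = item

    def retrieve(self, key):
        """Later retrieval from long-term memory."""
        return self.long_term_memory.get(key)
```

Here information decays from the sensory register as soon as new input arrives, the short-term work space discards its oldest item when full, and only rehearsed items reach long-term memory.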



What do you mean, sensors/percepts and effectors/actions?

For Humans
– Sensors: Eyes (vision), ears (hearing), skin (touch), tongue (gustation), nose (olfaction), neuromuscular system (proprioception)
– Percepts:
• At the lowest level – electrical signals from these sensors
• After preprocessing – objects in the visual field (location, textures, colors, …), auditory streams (pitch, loudness, direction), …
– Effectors: limbs, digits, eyes, tongue, …..
– Actions: lift a finger, turn left, walk, run, carry an object, …
The Point: percepts and actions need to be carefully defined, possibly at different levels of abstraction
A more specific example: Automated taxi driving system
• Percepts: Video, sonar, speedometer, odometer, engine sensors, keyboard input, microphone, GPS, …
• Actions: Steer, accelerate, brake, horn, speak/display, …
• Goals: Maintain safety, reach destination, maximize profits (fuel, tire wear), obey laws, provide passenger comfort, …
• Environment: Urban streets, freeways, traffic, pedestrians, weather, customers, …
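The point about levels of abstraction can be sketched in code for the taxi example; the types, field names, and thresholds below are invented for illustration:

```python
from dataclasses import dataclass


@dataclass
class RawPercept:
    """Lowest level: raw signals from the taxi's sensors."""
    video_frame: bytes
    sonar_cm: float
    speed_kmh: float


@dataclass
class Percept:
    """After preprocessing: an object-level description of the scene."""
    obstacle_ahead: bool
    speed_kmh: float


def preprocess(raw: RawPercept) -> Percept:
    """Abstract raw sensor signals into an object-level percept."""
    return Percept(obstacle_ahead=raw.sonar_cm < 200.0,
                   speed_kmh=raw.speed_kmh)


def choose_action(p: Percept) -> str:
    """Map an object-level percept to one of the taxi's actions."""
    if p.obstacle_ahead:
        return "brake"
    return "accelerate" if p.speed_kmh < 50.0 else "steer"
```

The same environment yields very different percepts depending on the level of abstraction chosen, which is why the agent designer must fix that level explicitly.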

Describe the rule-based approach to Knowledge Representation

Rule-based approach:
Rule-based systems are used as a way to store and manipulate knowledge so as to interpret information in a useful way. The idea in this approach is to use production rules, sometimes called IF-THEN rules. The syntactic structure is
IF <premise> THEN <action>
<premise> - a Boolean expression. The logical connectives AND, and to a lesser degree OR and NOT, may be used.
<action> - a series of statements
Notes:
• The rule premise can consist of a series of clauses and is sometimes referred to as the antecedent
• The actions are sometimes referred to as the consequent
A typical rule-based system has four basic components:
• A list of rules, or rule base, which is a specific type of knowledge base.
• An inference engine or semantic reasoner, which infers information or takes action based on the interaction of input and the rule base.
• Temporary working memory.
• A user interface or other connection to the outside world through which input and output signals are received and sent.
Working Memory contains facts about the world, which may be observed directly or derived from rules. It holds temporary knowledge, i.e. knowledge about the current problem-solving session, and may be modified by the rules.
Facts are traditionally stored as <object, attribute, value> triples.
Rule Base contains the rules; each rule is a step in a problem-solving process. Rules are persistent knowledge about the domain and are typically modified only from outside the system, e.g. by an expert on the domain.
The syntax is an IF <conditions> THEN <actions> format.
The conditions are matched against the working memory, and if they are fulfilled, the rule may be fired.
Actions can be:
• Adding fact(s) to the working memory.
• Removing fact(s) from the working memory.
• Modifying fact(s) in the working memory.
The Interpreter operates on a cycle:
• Retrieval: Finds the rules that match the current working memory. These rules form the Conflict Set.
• Refinement: Prunes, reorders and resolves conflicts in the Conflict Set.
• Execution: Applies the selected rules by performing their actions.
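The retrieval-refinement-execution cycle above can be sketched as a minimal forward-chaining interpreter. The rule names, the example facts, and the trivial conflict-resolution strategy are all invented for illustration; working memory is a set of <object, attribute, value> triples as described above:

```python
# Each rule is (name, premise, action); premise and action inspect/modify
# working memory, which is a set of (object, attribute, value) triples.
rules = [
    ("duck-test",
     lambda wm: ("animal", "sound", "quack") in wm
                and ("animal", "motion", "waddles") in wm,
     lambda wm: wm.add(("animal", "is", "duck"))),
    ("fly-test",
     lambda wm: ("animal", "is", "duck") in wm,
     lambda wm: wm.add(("animal", "can", "fly"))),
]


def run(working_memory):
    """Match-resolve-act cycle: fire applicable rules until quiescence."""
    fired = set()
    while True:
        # Retrieval: rules whose premise matches working memory (conflict set)
        conflict_set = [(name, action) for name, premise, action in rules
                        if premise(working_memory) and name not in fired]
        if not conflict_set:
            return working_memory
        # Refinement: trivially pick the first rule; real systems resolve
        # conflicts by recency, specificity, or priority
        name, action = conflict_set[0]
        # Execution: perform the rule's action, modifying working memory
        action(working_memory)
        fired.add(name)
```

Note how firing the first rule adds a fact that makes the second rule's premise match on the next cycle: facts derived by one rule feed subsequent retrievals.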


What do you mean by Acting Humanly: The Turing Test Approach?

The Turing test, proposed by Alan Turing (1950), was designed to provide an operational test of whether a particular machine can think. He suggested a test based on indistinguishability from undeniably intelligent entities: human beings. The test involves an interrogator who interacts with one human and one machine. Within a given time the interrogator has to find out which of the two is the human, and which the machine.
The computer passes the test if the human interrogator, after posing some written questions, cannot tell whether the written responses come from a human or from a machine.
To pass the Turing test, a computer would need the following capabilities:
• Natural language processing: to communicate successfully in English.
• Knowledge representation: to store what it knows and hears.
• Automated reasoning: to answer questions based on the stored information.
• Machine learning: to adapt to new circumstances.
The Turing test avoids physical interaction with the human interrogator, since physical simulation of a human being is not necessary for testing intelligence.

What is knowledge? What are the properties of knowledge representation systems?

Knowledge is a theoretical or practical understanding of a subject or a domain. Knowledge is also the sum of what is currently known.
Knowledge is "the sum of what is known: the body of truth, information, and principles acquired by mankind." Or, "Knowledge is what I know, information is what we know."
There are many other definitions such as:
- Knowledge is "information combined with experience, context, interpretation, and reflection. It is a high-value form of information that is ready to apply to decisions and actions." (T. Davenport et al., 1998)
- Knowledge is "human expertise stored in a person's mind, gained through experience, and interaction with the person's environment." (Sunasee and Sewery, 2002)
Knowledge consists of information that has been:
– interpreted,
– categorised,
– applied, experienced and revised.

Knowledge representation (KR) is the study of how knowledge about the world can be represented and what kinds of reasoning can be done with that knowledge. Knowledge Representation is the method used to encode knowledge in Intelligent Systems.
The following properties should be possessed by a knowledge representation system.
Representational Adequacy
- the ability to represent the required knowledge;
Inferential Adequacy
- the ability to manipulate the knowledge represented to produce new knowledge corresponding to that inferred from the original;
Inferential Efficiency
- the ability to direct the inferential mechanisms into the most productive directions by storing appropriate guides;
Acquisitional Efficiency
- the ability to acquire new knowledge using automatic methods wherever possible rather than reliance on human intervention.


What are intelligent agents? What are the properties of the intelligent agents?

An Intelligent Agent perceives its environment via sensors and acts rationally upon that environment with its effectors (actuators). Hence, an agent receives percepts one at a time and maps this percept sequence to actions.
Properties of the agent
– Autonomous
– Interacts with other agents plus the environment
– Reactive to the environment
– Pro-active (goal-directed)
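A minimal sketch of the percept-sequence-to-action mapping described above; the thermostat rule and its threshold are an invented example, not from the text:

```python
class Agent:
    """Keeps the percept history and maps it to actions via an agent program."""

    def __init__(self, program):
        self.percepts = []      # the percept sequence seen so far
        self.program = program  # agent function: percept sequence -> action

    def step(self, percept):
        self.percepts.append(percept)       # perceive via "sensors"
        return self.program(self.percepts)  # act via "effectors"


def thermostat(percepts):
    """Illustrative agent program: react to the latest temperature reading."""
    return "heat-on" if percepts[-1] < 18 else "heat-off"
```

Because the agent keeps the whole percept sequence, the program is free to be purely reactive (as here) or to consult the history and act pro-actively toward a goal.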


What are the Foundations of AI?

Philosophy:
Logic, reasoning, mind as a physical system, foundations of learning, language and rationality.
 Where does knowledge come from?
 How does knowledge lead to action?
 How does a mental mind arise from a physical brain?
 Can formal rules be used to draw valid conclusions?
Mathematics:
Formal representation and proof algorithms, computation, undecidability, intractability, probability.
 What are the formal rules to draw the valid conclusions?
 What can be computed?
 How do we reason with uncertain information?
Psychology:
Adaptation, phenomena of perception and motor control.
 How do humans and animals think and act?
Economics:
Formal theory of rational decisions, game theory, operations research.
 How should we make decisions so as to maximize payoff?
 How should we do this when others may not go along?
 How should we do this when the payoff may be far in future?
Linguistics:
Knowledge representation, grammar
 How does language relate to thought?
Neuroscience:
Physical substrate for mental activities
 How do brains process information?
Control theory:
Homeostatic systems, stability, optimal agent design
 How can artifacts operate under their own control?

What is Artificial Intelligence?

Intelligence is:
– the ability to reason
– the ability to understand
– the ability to create
– the ability to Learn from experience
– the ability to plan and execute complex tasks

Artificial intelligence can be described as the ability given to a machine to perform tasks normally associated with human intelligence.
According to Barr and Feigenbaum:
"Artificial Intelligence is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior."
Different books and writers give different definitions of AI. These definitions can be arranged along two dimensions.
The top dimension is concerned with thought processes and reasoning, whereas the bottom dimension addresses behavior. Definitions on the left measure success in terms of fidelity to human performance, whereas definitions on the right measure success against an ideal concept of intelligence, called rationality.
A human-centered approach must be an empirical science, involving hypotheses and experimental confirmation. A rationalist approach involves a combination of mathematics and engineering.

Describe Turing response to Descartes

Mind, in Descartes's view, is special, central to human existence, basically reliable. The mind stands apart from and operates independently of the human body, a totally different sort of entity. The body is best thought of as an automaton, which can be compared to the machines made by men. It is divisible into parts, and elements could be removed without altering anything fundamental. But even if one could design an automaton as complex as a human body, that automaton can never resemble the human mind, for the mind is unified and not decomposable. Moreover, unlike a human mind, a bodily machine could never use speech or other signs in placing its thoughts before other individuals. An automaton might parrot information, but "it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do" (quoted in Wilson 1969, p. 13S).
Turing devised the test in 1950 as a hypothetical test to determine when a machine had been imbued with sufficient intelligence to pass for human. In the test, a human judge is placed with two computer terminals, one connected to another human and the other to a machine. The judge then converses with each terminal, and if he is unable to determine which terminal is connected to the machine, the machine is said to have attained intelligence similar to a human's.
This test is often presented as a product of the 20th century. However, René Descartes’ Discourse on Method, written in 1637, contains the following passage, which bears a fair resemblance to the Turing Test:
If there were machines which had the organs and the external shape of a monkey or of some other animal without reason, we would have no way of recognizing that they were not exactly the same nature as the animals; whereas, if there was a machine shaped like our bodies which imitated our actions as much as is morally possible, we would always have two very certain ways of recognizing that they were not, for all their resemblance, true human beings.
The first of these is that they would never be able to use words or other signs to make words as we do to declare our thoughts to others. For one can easily imagine a machine made in such a way that it expresses words, even that it expresses some words relevant to some physical actions which bring about some change in its organs (for example, if one touches it in some spot, the machine asks what it is that one wants to say to it; if in another spot, it cries that one has hurt it, and things like that), but one cannot imagine a machine that arranges words in various ways to reply to the sense of everything said in its presence, as the most stupid human beings are capable of doing.
It seems that Descartes was able to conceive not only of a machine that might mimic a human in form, but also in action and speech, and he reasoned that the best way to differentiate this machine from a human being would be to engage it in conversation, and observe whether it conversed naturally, in the manner of a human being, or whether the conversation would be driven solely by rote and logic.
Thus the essence of the Turing test, proposed by Alan Turing in 1950, was anticipated by Descartes in his Discourse on Method some three centuries earlier.
In "Computer Technology," the section introducing Turing (1950), Stuart Shieber, well known to computational linguists and computer scientists for his research, suggests that Turing played the same role with respect to electronic computers that Descartes played with respect to mechanical devices, asking the same questions about a different technology.

What is Marr’s Three Level of Information Processing?

In recent work in the theoretical foundations of cognitive science, it has become commonplace to separate three distinct levels of analysis of information-processing systems. David Marr (1982) has dubbed the three levels the computational, the algorithmic, and the implementational; Zenon Pylyshyn (1984) calls them the semantic, the syntactic, and the physical; and textbooks in cognitive psychology sometimes call them the levels of content, form, and medium (e.g. Glass, Holyoak, and Santa 1979).
David Marr presents his variant on the "three levels" story. His summary of "the three levels at which any machine carrying out an information-processing task must be understood":
Computational theory: What is the goal of the computation, why is it appropriate, and what is the logic of the strategy by which it can be carried out? What is a concept? What does it mean to learn a concept successfully?
Representation and algorithm: How can this computational theory be implemented? In particular, what is the representation for the input and output, and what is the algorithm for the transformation? How are objects and concepts represented? How much memory (space) and computation (time) does a learning algorithm require?
Hardware implementation: How can the representation and algorithm be realized physically?
As an illustration, Marr applies this distinction to the levels of theorizing about a well-understood device: a cash register.
At the computational level, "the level of what the device does and why", Marr tells us that "what it does is arithmetic, so our first task is to master the theory of addition".
But at the level of representation and algorithm, which specifies the forms of the representations and the algorithms defined over them, "we might choose Arabic numerals for the representations, and for the algorithm we could follow the usual rules about adding the least significant digits first and `carrying' if the sum exceeds 9".
And, at the implementational level, we face the question of how those symbols and processes are actually physically implemented; e.g., are the digits implemented as positions on a ten-notch metal wheel, or as binary coded decimal numbers implemented in the electrical states of digital logic circuitry?
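The algorithmic choice Marr describes (Arabic numerals, adding the least significant digits first, and "carrying" when a column's sum exceeds 9) can be written out directly; this is an illustrative sketch, not Marr's own formulation:

```python
def add_arabic(x: str, y: str) -> str:
    """Add two numbers represented as Arabic-numeral strings, working from
    the least significant digit and carrying when a column sum exceeds 9."""
    width = max(len(x), len(y))
    x, y = x.zfill(width), y.zfill(width)  # pad to equal length with '0'
    digits, carry = [], 0
    for a, b in zip(reversed(x), reversed(y)):  # least significant first
        column_sum = int(a) + int(b) + carry
        carry, digit = divmod(column_sum, 10)   # carry if sum exceeds 9
        digits.append(str(digit))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))
```

Choosing a different representation (say, binary) would change the algorithm while leaving the computational theory, addition, untouched; and the same algorithm could be realized on a notched wheel or in logic circuitry at the implementational level.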
Taking a closer look at Marr's scheme, we might see the three perspectives of algorithm, content of computation, and implementation as having something like the following questions associated with them:
Format and algorithm: What is the syntactic structure of the representations at this level, and what algorithms are used to transform them? What is the real structure of the virtual machine? What's the program? From this perspective, the questions are explicitly information-processing questions. Further, it's this level of functional decomposition of the system which specifies the level of organization with which we are currently concerned, and to which the other two perspectives are related.
Content, function, and interpretation: What are the relational or global functional roles of the main processes described at this level? What tasks are being performed by these processes, and why? These are centrally questions about the interpretation and global function of the parts and procedures specified in our algorithmic analysis.
Implementation: How are the primitives of the current level implemented? By another computationally characterized virtual machine? Directly in the hardware? How much decomposition (in terms of kinds of primitives, structures, abilities, etc.) is there between the current level and what is implementing it? How much of the work is done by the postulated primitives of this level as opposed to being done explicitly by the analyzed processes? The shift from algorithm to implementation is thus centrally one of levels of organization or functional decomposition; i.e. of what happens when we try to move down a level of organization.

Describe Descartes Mind Body Problem (Theory of dualism):

Dualism: The term dualism denotes the state of being dual, or having a twofold division. A dualist doctrine consists of two basic opposing elements; generally, it is any system founded on a double principle. In philosophy of mind, dualism is a set of views about the relationship between mind and matter which begins with the claim that mental phenomena are, in some respects, non-physical.
A generally well-known version of dualism is attributed to René Descartes (1641), which holds that the mind is a nonphysical substance. Descartes was the first to clearly identify the mind with consciousness and self-awareness and to distinguish this from the brain, which was the seat of intelligence. Hence, he was the first to formulate the mind-body problem in the form in which it exists today.
The mind-body problem can be stated as, "What is the basic relationship between the mental and the physical?" For the sake of simplicity, we can state the problem in terms of mental and physical events: "What is the basic relationship between mental events and physical events?" It could also be stated in terms of the relation between mental and physical states and/or processes, or between the brain and consciousness.
The mind-body problem is that of stating the exact relation between the mind and the body, or, more narrowly, between the mind and the brain. Most of the theories of the mind-body relation exist also as metaphysical theories of reality as a whole. While debates over the mind-body problem can seem intractable, science offers at least two promising lines of research. On the one hand, parts of the mind-body problem arise in research in artificial intelligence and might be solved by a better understanding of the relations between hardware and software.
The famous mind-body problem has its origins in Descartes’ conclusion that mind and body are really distinct. The crux of the difficulty lies in the claim that the respective natures of mind and body are completely different and, in some way, opposite from one another. On this account, the mind is an entirely immaterial thing without any extension in it whatsoever; and, conversely, the body is an entirely material thing without any thinking in it at all. This also means that each substance can have only its kind of modes. For instance, the mind can only have modes of understanding, will and, in some sense, sensation, while the body can only have modes of size, shape, motion, and quantity. But bodies cannot have modes of understanding or willing, since these are not ways of being extended; and minds cannot have modes of shape or motion, since these are not ways of thinking.
Descartes was aware that positing two distinct entities, a rational mind and a mechanical body, made any explanation of their interaction implausible. How can an immaterial entity control, interact with, or react to a mechanical substance? He made various stabs at solving this problem, none of them (as he knew) totally convincing. But in the process of trying to explain the interaction of mind and body, Descartes became in effect a physiologically oriented psychologist: he devised models of how mental states could exist in a world of sensory experience, models featuring physical objects that had to be perceived and handled.
The basic steps in Descartes's argument:
• Reject any idea that can be doubted.
• Our senses deceive us (dreams).
• Our senses limit our knowledge (the wax example).
o Knowledge is gained through the mind.
• The only thing one cannot doubt is doubt itself.
• I doubt, therefore I think, therefore I am.
Descartes determined that the mind, an active reasoning entity, was the ultimate arbiter of truth. And he ultimately attributed ideas to innate rather than to experiential cause.
To further demonstrate the limitations of the senses, Descartes proceeds with what is known as the Wax Argument. He considers a piece of wax; his senses inform him that it has certain characteristics, such as shape, texture, size, color, smell, and so forth. When he brings the wax towards a flame, these characteristics change completely. However, it seems that it is still the same thing: it is still a piece of wax, even though the data of the senses inform him that all of its characteristics are different. Therefore, in order to properly grasp the nature of the wax, he cannot use the senses. He must use his mind. Descartes concludes: And so something which I thought I was seeing with my eyes is in fact grasped solely by the faculty of judgment which is in my mind.

What is the relation between cognitive science and other sciences?

Cognitive science tends to view the world outside the mind much as other sciences do. Thus it too has an objective, observer-independent existence. The field is usually seen as compatible with the physical sciences, and uses the scientific method as well as simulation or modeling, often comparing the output of models with aspects of human behavior. Still, there is much disagreement about the exact relationship between cognitive science and other fields, and the interdisciplinary nature of cognitive science is largely both unrealized and circumscribed.
Philosophy:
Philosophy is the investigation of fundamental questions about the nature of knowledge, reality, and morals. It is the study of general and fundamental problems concerning matters such as existence, knowledge, values, reason, mind, and language. Philosophy is distinguished from other ways of addressing these questions by its critical, generally systematic approach.

Philosophy interfaces with cognitive science in three distinct but related areas. First, there is the usual set of issues that fall under the heading of philosophy of science (explanation, reduction, etc.), applied to the special case of cognitive science. Second, there is the endeavor of taking results from cognitive science as bearing upon traditional philosophical questions about the mind, such as the nature of mental representation, consciousness, free will, perception, emotions, memory, etc. Third, there is what might be called theoretical cognitive science, which is the attempt to construct the foundational theoretical framework and tools needed to get a science of the physical basis of the mind off the ground -- a task which naturally has one foot in cognitive science and the other in philosophy.
Psychological sciences: Psychology
Psychology is the study of mental activity. It incorporates the investigation of the human mind and behavior, and goes back at least to Plato and Aristotle.
Psychology is the science that investigates mental states directly. It uses generally empirical methods to investigate concrete mental states like joy, fear or obsessions. Psychology investigates the laws that bind these mental states to each other or with inputs and outputs to the human organism.
Psychology is now part of cognitive science, the interdisciplinary study of mind and intelligence, which also embraces the fields of neuroscience, artificial intelligence, linguistics, anthropology, and philosophy.
Biological sciences: Neuroscience
Neuroscience is a field of study which deals with the structure, function, development, genetics, biochemistry, physiology, pharmacology and pathology of the nervous system. The study of behavior and learning is also a division of neuroscience.
In cognitive science, it is very important to recognize the importance of neuroscience in contributing to our knowledge of human cognition. Cognitive scientists must have, at the very least, a basic understanding of, and appreciation for, neuroscientific principles. In order to develop accurate models, the basic neurophysiological and neuroanatomical properties must be taken into account.
Socio-cultural sciences: Sociology
Sociology is the scientific or systematic study of human societies. It is a branch of social science that uses various methods of empirical investigation and critical analysis to develop and refine a body of knowledge about human social structure and activity.
Linguistics
Linguistics is another discipline that is arguably wholly subsumed by cognitive science. After all, language is often held to be the "mirror of the mind": the (physical) means for one mind to communicate its thoughts to another.
Linguistics is the scientific study of natural language. The study of language processing in cognitive science is closely tied to the field of linguistics. Linguistics was traditionally studied as a part of the humanities, including studies of history, art and literature. In the last fifty years or so, more and more researchers have studied knowledge and use of language as a cognitive phenomenon, the main problems being how knowledge of language can be acquired and used, and what precisely it consists of. Some of the driving research questions in studying how the brain processes language include:
(1) To what extent is linguistic knowledge innate or learned?
(2) Why is it more difficult for adults to acquire a second language than it is for infants to acquire their first language?
(3) How are humans able to understand novel sentences?
Computer Science: Artificial Intelligence
Artificial intelligence (AI) involves the study of cognitive phenomena in machines. One of the practical goals of AI is to implement aspects of human intelligence in computers. Textbooks define this field as “the study and design of intelligent agents”. Computers are also widely used as a tool with which to study cognitive phenomena. Computational modeling uses simulations to study how human intelligence may be structured.
Given the computational view of cognitive science, it is arguable that all research in artificial intelligence is also research in cognitive science.
Mathematics
In mathematics, the theory of computation developed by Turing (1936) and others provided a theoretical framework for describing how states and processes interposed between input and output might be organized so as to execute a wide range of tasks and solve a wide range of problems. The framework of McCulloch and Pitts (1943) attempted to show how neuron-like units acting as AND- and OR-gates, etc., could be arranged so as to carry out complex computations. And while evidence that real neurons behave in this way was not forthcoming, it at least provided some hope for physiological vindication of such theories.
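The McCulloch-Pitts idea of neuron-like units acting as logic gates can be sketched as simple threshold units; this is a simplified sketch that ignores inhibitory inputs:

```python
def mp_unit(inputs, threshold):
    """McCulloch-Pitts unit: fires (outputs 1) iff the number of active
    (1-valued) inputs reaches the threshold."""
    return 1 if sum(inputs) >= threshold else 0


# With two inputs, threshold 2 gives an AND-gate; threshold 1 gives an OR-gate.
def AND(a, b):
    return mp_unit([a, b], threshold=2)


def OR(a, b):
    return mp_unit([a, b], threshold=1)
```

Units like these can be wired into networks whose layers compute progressively more complex Boolean functions, which is the sense in which McCulloch and Pitts argued such assemblies could carry out complex computations.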
