Abstract
In the near future, a pervasive computing environment can be expected, based on recent advances in computing and communication technologies. The next generation of mobile communication will include both the familiar infrastructured wireless networks and novel infrastructureless mobile ad hoc networks (MANETs). A MANET is a collection of wireless nodes that can dynamically form a network to exchange information without using any pre-existing fixed network infrastructure. Wireless ad hoc networks are increasingly being used on the battlefield and in emergency search and rescue missions. The special features of MANETs bring great opportunities together with severe challenges. This report describes the concept, features, status, and fundamental problems of ad hoc networking. Special attention is paid to network layer routing strategy and intrusion detection in MANETs.
Introduction
Our future living environments are likely to be based upon information resources provided to users by the interconnection of various communication networks. Small devices like Personal Digital Assistants (PDAs), mobile phones, and laptops combine information processing and access capabilities with mobility. To cater to the needs of such devices, modern technologies should adopt a new model of pervasive computing, including new architectures, standards, devices, services, tools, and protocols.
Mobile computing is one of the most important technologies supporting pervasive computing. Advances in both hardware and software have brought mobile hosts and wireless networking to the masses. Generally, there are two modes in which wireless mobile nodes can communicate:
Infrastructured: In this mode, all communication among the mobile nodes goes through a base station, also known as an access point. These base stations are connected to the fixed infrastructure or wired networks.
Infrastructureless: This mode of communication is known as a mobile ad hoc network (MANET). A MANET is a collection of wireless nodes that can dynamically form a network to exchange information without using any pre-existing fixed network infrastructure. This mode is a very important part of communication technology for truly pervasive computing, because in many contexts information exchange between mobile units cannot rely on any fixed network infrastructure, but only on the rapid configuration of wireless connections on the fly. A typical example of this mode of communication is people sitting in a conference room and exchanging data among themselves without any fixed infrastructure.
Related Background
2.1 MANET Concept
A mobile ad hoc network is a collection of wireless nodes that can dynamically be set up anywhere and anytime without using any pre-existing network infrastructure. It is an autonomous system in which mobile hosts connected by wireless links are free to move randomly. In a MANET, nodes act as both hosts and routers. The traffic types in ad hoc networks are quite different from those in an infrastructured wireless network, including:
1 Peer-to-Peer: Communication between two nodes, which are within one hop.
2 Remote-to-Remote: Communication between two nodes beyond a single hop but which maintain a stable route between them. This may be the result of several nodes staying within communication range of each other in a single area or possibly moving as a group. The traffic is similar to standard network traffic.
3 Dynamic Traffic: This occurs when nodes are dynamic and moving around. Routes must be reconstructed, which results in poor connectivity and network activity occurring in short bursts.
2.2 MANET Features
MANET has the following features:
1 Autonomous terminal: In a MANET, each mobile host is an autonomous node that may function as both a host and a router. In other words, besides its basic processing ability as a host, a mobile node can also perform switching functions as a router. Endpoints and switches are therefore usually indistinguishable in a MANET.
2 Distributed operation: Since there is no background network for the central control of the network operations, the control and management of the network is distributed among the terminals. The nodes involved in a MANET should collaborate amongst themselves, and each node acts as a relay as needed to implement functions such as security and routing.
3 Multi-hop routing: Basic types of ad hoc routing algorithms can be single-hop or multi-hop. A single-hop MANET is simpler than a multi-hop one in terms of structure and implementation, at the cost of less functionality and applicability. When delivering data packets from a source to a destination outside its direct wireless transmission range, the packets must be forwarded via one or more intermediate nodes.
4 Dynamic network topology: Since the nodes are mobile, the network topology may change rapidly and unpredictably, and the connectivity among the terminals may vary with time. A MANET should adapt to the traffic and propagation conditions as well as the mobility patterns of the mobile network nodes. The mobile nodes dynamically establish routing among themselves as they move about, forming their own network on the fly. Moreover, a user in a MANET may not only operate within the ad hoc network, but may also require access to a public fixed network.
5 Fluctuating link capacity: The high bit-error rates typical of wireless connections may be even more pronounced in a MANET. One end-to-end path can be shared by several sessions. The channel over which the terminals communicate is subject to noise, fading, and interference, and has less bandwidth than a wired network. In some scenarios, the path between any pair of users can traverse multiple wireless links, and the links themselves can be heterogeneous. One effect of the relatively low to moderate capacities is that congestion is typically the norm rather than the exception, i.e., aggregate application demand will likely approach or exceed network capacity frequently.
6 Energy-constrained operation: Some or all of the nodes in a MANET may rely on batteries or other means for their energy. Such devices need optimized algorithms and mechanisms that implement the computing and communicating functions.
7 Limited physical security: MANETs are generally more prone to physical security threats than are fixed cable networks. The increased possibility of eavesdropping, spoofing and denial-of-service attacks should be carefully considered.
2.3. MANET Status
Ad hoc networking is not a new concept. It was first deployed by the military in the 1970s. Commercial interest in such networks has grown recently due to advances in wireless communication. A working group for MANET has been formed within the Internet Engineering Task Force (IETF) to investigate and develop standards for Internet routing support for mobile, wireless IP autonomous segments and to develop a framework for running IP-based protocols in ad hoc networks.
The recent IEEE 802.11 standard has increased research interest in the field. Many international conferences and workshops have been held by, e.g., IEEE and ACM. For instance, MobiHoc (the ACM Symposium on Mobile Ad Hoc Networking & Computing) has been one of the most important conferences of ACM SIGMOBILE (the Special Interest Group on Mobility of Systems, Users, Data and Computing). Research in the area of ad hoc networking is receiving increasing attention from academia, industry, and government. Since these networks pose many complex issues, there are many open problems for research and opportunities for significant contributions.
3. Challenges In MANETs
The special features of MANETs bring this technology great opportunities together with severe challenges. These include:
3.1 Routing in MANETs
The main function of the network layer is routing packets from the source machine to the destination machine. The routing algorithm is the part of the network layer software responsible for deciding which output line an incoming packet should be transmitted on. The algorithms that choose the routes and the data structures that they use are a major area of network layer design.
What makes routing a challenging task in ad hoc networks, when there are plenty of algorithms available for wired networks?
The reason is the changing topology of ad hoc networks. The assumptions that hold for a wired network, i.e., a fixed topology and fixed, known neighbors, automatically become invalid. With an ad hoc network, the topology may be changing all the time, so valid routes can change spontaneously without any warning.
The following are the desirable properties of MANET routing protocol:
1 Distributed operation: Since there is no central point as in a wired network, each and every node in the MANET performs routing.
2 Loop freedom: Even if loop freedom is not incorporated in the routing protocol, a TTL value can be used to prevent packets from roaming the network for arbitrarily long periods of time. Still, this property is desirable for efficient use of resources and better overall performance.
3 Demand-based operation: Instead of assuming a uniform traffic distribution within the network and maintaining routing information between all nodes at all times, the routing algorithm should adapt to the traffic pattern on a demand or need basis. Done this way, it can utilize the mobile nodes' energy and network bandwidth more efficiently, at the cost of increased route discovery delay.
4 Proactive operation: This is the opposite of demand-based operation. In certain contexts, the additional latency of demand-based operation is unacceptable. If bandwidth and energy resources permit, proactive operation is desirable in these contexts.
5 Security: Without some form of network-level or link-layer security, a MANET routing protocol is vulnerable to many forms of attack. Within a wireless network without appropriate security provisions, it may be relatively simple to snoop network traffic, manipulate packet headers, and redirect routing messages.
6 “Sleep” period operation: As a result of energy conservation, or some other need to be inactive, nodes of a MANET may stop transmitting and/or receiving for arbitrary time periods. A routing protocol should be able to accommodate such sleep periods without overly adverse consequences.
7 Unidirectional link support: Bi-directional links are typically assumed in the design of routing algorithms, and many algorithms are incapable of functioning properly over unidirectional links. But, unidirectional links can and do occur in wireless networks.
Example: Ad hoc On-demand Distance Vector (AODV) Routing
AODV is a routing algorithm specially designed for ad hoc networks. It is a distant relative of the Bellman-Ford distance vector algorithm, adapted to work in a mobile environment. It takes into account the limited bandwidth and low battery life of the mobile nodes found in the ad hoc environment, and it provides loop-free routes. Another very important characteristic is that it is an on-demand algorithm, that is, it determines a route to some destination only when somebody wants to send a packet to that destination.
To find a path to the destination, the source broadcasts a route request packet. The neighbors in turn broadcast the packet to their neighbors until it reaches an intermediate node that has recent route information about the destination, or until it reaches the destination (Figure 2). A node discards a route request packet that it has already seen. The route request packet uses sequence numbers to ensure that the routes are loop free and to make sure that if intermediate nodes reply to route requests, they reply with the latest information only.
When a node forwards a route request packet to its neighbors, it also records in its tables the node from which the first copy of the request came. This information is used to construct the reverse path for the route reply packet. AODV uses only symmetric links, because the route reply packet follows the reverse path of the route request packet. As the route reply packet traverses back to the source (Figure 3), the nodes along the path enter the forward route into their tables.
If the source moves, it can reinitiate route discovery to the destination. If one of the intermediate nodes moves, its neighbor detects the link failure and sends a link failure notification to its upstream neighbor, and so on, until the notification reaches the source, which can then reinitiate route discovery if needed.
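To make the route discovery procedure concrete, the following Python sketch simulates RREQ flooding and the reverse-path route reply over a static graph. It is a minimal illustration under simplifying assumptions, not the full AODV specification: timers, HELLO messages, route error handling, and intermediate-node replies are omitted, and the three-node topology in the usage example is invented.

```python
# A minimal, illustrative sketch of AODV-style route discovery over a static graph.
from collections import deque

class Node:
    def __init__(self, node_id):
        self.id = node_id
        self.neighbors = []      # Node objects within radio range
        self.routes = {}         # destination -> (next_hop, hop_count, dest_seq)
        self.seen_rreqs = set()  # (source, broadcast_id) pairs already processed
        self.seq = 0             # this node's own sequence number

def discover_route(nodes, src_id, dst_id):
    """Flood a route request (RREQ) from src to dst; return the forward path."""
    src = nodes[src_id]
    src.seq += 1
    rreq = {"src": src_id, "src_seq": src.seq, "bcast_id": 1, "dst": dst_id}
    queue = deque([(src_id, None)])          # (current node, node it heard the RREQ from)
    while queue:                             # BFS models the broadcast expanding hop by hop
        nid, prev = queue.popleft()
        node = nodes[nid]
        key = (rreq["src"], rreq["bcast_id"])
        if key in node.seen_rreqs:
            continue                         # discard RREQs already seen
        node.seen_rreqs.add(key)
        if prev is not None:
            # Remember where the first copy came from: the reverse path for the RREP.
            node.routes[src_id] = (prev, None, rreq["src_seq"])
        if nid == dst_id:
            return send_rrep(nodes, dst_id, src_id)
        for nbr in node.neighbors:
            queue.append((nbr.id, nid))
    return None                              # destination unreachable

def send_rrep(nodes, dst_id, src_id):
    """Walk the reverse path back to the source, installing forward routes."""
    path, current, hops = [dst_id], dst_id, 0
    while current != src_id:
        next_hop, _, _ = nodes[current].routes[src_id]   # reverse-path entry
        hops += 1
        # The upstream node learns a forward route to the destination.
        nodes[next_hop].routes[dst_id] = (current, hops, nodes[dst_id].seq)
        current = next_hop
        path.append(current)
    return list(reversed(path))              # source ... destination

# Usage: a three-node chain A - B - C, so A must route through B to reach C.
nodes = {i: Node(i) for i in "ABC"}
nodes["A"].neighbors = [nodes["B"]]
nodes["B"].neighbors = [nodes["A"], nodes["C"]]
nodes["C"].neighbors = [nodes["B"]]
print(discover_route(nodes, "A", "C"))       # ['A', 'B', 'C']
```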
3.2 Intrusion detection
Intrusion detection has become very important within the realm of network security, especially in the case of wireless ad hoc networks. Intrusion detection is defined as the method of identifying “any set of actions that attempt to compromise the integrity, confidentiality or availability of a resource”. Intrusion detection techniques attempt to detect intrusions into a computer or network by observing actions, security logs, or audit data.
The following are the primary assumptions that have to be made when working on intrusion detection:
1 User and program activities are observable, that is the information regarding the usage of a system by a user or program must be recordable and analyzable.
2 Normal and intrusive behavior must have distinct characteristics.
In order to detect an intrusion attack, one needs to make use of a model of intrusion. That is, we need to know what an Intrusion Detection System (IDS) should look out for. There are two types of models employed in current IDSs:
1 Anomaly Detection: The first model bases its detection upon a profile of a user's normal behavior. It analyzes the user's current session, compares it to the profile representing the user's normal behavior, and raises an alarm if a significant deviation is found between the audit data and the user's profile. This type of detection system is well suited to detecting new, previously unencountered attacks.
2 Misuse Detection: The second model bases its detection upon comparison of a user's session or commands with a rule base of techniques used by attackers previously. This model looks for known attacks in the user's behavior. A typical misuse detection system takes in audit data for analysis and compares the data to a large database of known attack signatures.
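To make the distinction concrete, here is a toy Python sketch of both models under invented assumptions: the audit feature, the training samples, and the signature strings are all made up for illustration; a real IDS would use far richer audit data.

```python
# A toy contrast of the two detection models described above.
import statistics

# Anomaly detection: learn a profile of normal behaviour, flag large deviations.
def train_profile(normal_samples):
    return statistics.mean(normal_samples), statistics.stdev(normal_samples)

def is_anomalous(value, profile, k=3.0):
    mean, stdev = profile
    return abs(value - mean) > k * stdev        # far outside the learned profile?

# Misuse detection: match observed activity against known attack signatures.
KNOWN_SIGNATURES = {"repeated RREQ flood", "forged sequence number"}

def is_known_attack(observed_events):
    return any(event in KNOWN_SIGNATURES for event in observed_events)

profile = train_profile([4, 5, 6, 5, 4, 6, 5])        # routing updates/min when normal
print(is_anomalous(40, profile))                      # True: large deviation from profile
print(is_known_attack(["forged sequence number"]))    # True: matches a stored signature
```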
Most IDSs take either a network-based or a host-based approach, depending on the source of their audit data.
1 Network-based IDS (NIDS): An IDS is network-based when it looks for attack patterns in network traffic. It listens on the network and captures and examines individual packets flowing through it. A NIDS is able to look at the payload within a packet, to see which host application is being accessed, and to raise an alert when an attacker tries to exploit a bug in that code. NIDSs are host independent, but can also be deployed as a software package installed on a dedicated workstation. A side effect of a NIDS is that its active scanning can slow down the network considerably.
2 Host-based IDS (HIDS): A host-based IDS is concerned with what is happening on each individual host. It is able to detect actions such as repeated failed access attempts or changes to critical system files, and it normally operates by accessing log files or monitoring real-time system usage.
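As a concrete, deliberately simplified example of the host-based approach, the sketch below scans log lines for repeated failed access attempts; the log format, the "user=" field, and the threshold are hypothetical.

```python
# A minimal host-based check: count repeated failed logins per user.
from collections import Counter

FAILED_LOGIN_THRESHOLD = 5

def failed_login_report(log_lines):
    """Count failed logins per user and report users over the threshold."""
    failures = Counter()
    for line in log_lines:
        # Assumed line format: "<timestamp> FAILED LOGIN user=<name>"
        if "FAILED LOGIN" in line and "user=" in line:
            user = line.split("user=")[1].strip()
            failures[user] += 1
    return {user: n for user, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD}

sample_log = ["2008-05-18 10:01 FAILED LOGIN user=alice"] * 6
print(failed_login_report(sample_log))   # {'alice': 6} -> worth raising an alert
```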
Why can existing IDSs not be used in ad hoc networks?
The vast difference between the two kinds of network makes it very difficult to apply intrusion detection techniques developed for fixed wired networks to an ad hoc wireless network. The most important difference is perhaps that an ad hoc network does not have a fixed infrastructure, so today's network-based IDSs, which rely on real-time traffic analysis at traffic concentration points such as switches, routers, and gateways, can no longer function well in this new environment. At any one time, the only available audit trace will be limited to communication activities taking place within radio range, and the intrusion detection algorithms must be made to work on this partial and localized information.
Further, there may not be a clear separation between normalcy and anomaly in a wireless ad hoc network. A node that sends out false routing information could be one that has been compromised, or merely one that is temporarily out of sync due to physical movement. An IDS may find it increasingly difficult to distinguish false alarms from real intrusions.
The following are the desirable features of Intrusion Detection System for MANET:
The IDS should not introduce a new weakness in the MANET. That is, the IDS itself should not make a node any weaker than it already is.
An IDS should run continuously and remain transparent to the system and users.
The IDS should use as little system resources as possible to detect and prevent intrusions. IDSs that require excessive communication among nodes or run complex algorithms are not desirable.
It must be fault-tolerant in the sense that it must be able to recover from system crashes, ideally to its previous state, and resume the operations it was performing before the crash.
Apart from detecting and responding to intrusions, an IDS should also monitor itself and detect if it has been compromised by an attacker.
An IDS should have a proper response. In other words, an IDS should not only detect but also respond to detected intrusions, preferably without human intervention.
Accuracy of the IDS is another major factor in MANETs. Fewer false positives and false negatives are desired.
It should interoperate with other intrusion detection systems to collaboratively detect intrusions.
Distributed IDS:
In their pioneering work on intrusion detection in MANETs, Zhang and Lee describe a distributed and cooperative intrusion detection model in which every node in the network participates in intrusion detection and response [3]. In this model, an IDS agent runs at each mobile node and performs local data collection and local detection, whereas cooperative detection and a global intrusion response can be triggered when a node reports an anomaly. Abnormal updates to routing tables, for example, are treated as a sign of intrusion.
The internals of an IDS agent are structured into six modules, as shown in the figure. Each node does local intrusion detection independently, and neighboring nodes collaboratively work on a larger scale. Individual IDS agents placed on each and every node run independently and monitor local activities (including user, system, and communication activities within the radio range), detect intrusions from local traces, and initiate responses. Neighboring IDS agents cooperatively participate in global intrusion detection actions when an anomaly is detected in local data or the evidence is inconclusive. The data collection module gathers local audit traces and activity logs that are used by the local detection engine to detect local anomalies. Detection methods that need broader data sets or require collaboration among local IDS agents use the cooperative detection engine. Both the local and global response modules provide intrusion response actions. The local response module triggers actions local to this mobile node (e.g., an IDS agent alerting the local user), while the global one coordinates actions among neighboring nodes, such as the IDS agents in the network electing a remedial action. A secure communication module provides a high-confidence communication channel among IDS agents.
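The sketch below mirrors that six-module structure in Python. It is only a structural outline, a minimal sketch assuming the module boundaries described above: the method bodies are placeholders rather than Zhang and Lee's actual detection logic, and the cooperative voting rule is an invented stand-in.

```python
# Structural outline of the six-module IDS agent; bodies are placeholders.
class IDSAgent:
    def __init__(self, node_id, neighbors):
        self.node_id = node_id
        self.neighbors = neighbors            # neighboring IDS agents within radio range

    # 1. Local data collection: gather audit traces and activity logs.
    def collect_local_data(self):
        return {"routing_updates": [], "system_calls": [], "traffic": []}

    # 2. Local detection engine: look for anomalies in the local audit data.
    def detect_locally(self, audit_data):
        return {"anomaly": False, "confidence": 0.0}

    # 3. Cooperative detection engine: consult neighbors when local evidence is inconclusive.
    def detect_cooperatively(self):
        votes = [n.detect_locally(n.collect_local_data()) for n in self.neighbors]
        return sum(v["confidence"] for v in votes) / max(len(votes), 1) > 0.5

    # 4. Local response: act on this node, e.g. alert the local user.
    def respond_locally(self):
        print(f"node {self.node_id}: local alert raised")

    # 5. Global response: coordinate a remedy with neighboring agents.
    def respond_globally(self):
        print(f"node {self.node_id}: initiating network-wide response")

    # 6. Secure communication: exchange messages with other agents over a protected channel.
    def send_secure(self, neighbor, message):
        pass  # placeholder for an authenticated, encrypted channel

    def run_once(self):
        result = self.detect_locally(self.collect_local_data())
        if result["anomaly"]:
            self.respond_locally()
        elif result["confidence"] > 0.0 and self.detect_cooperatively():
            self.respond_globally()
```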
This IDS presents a distributed and cooperative intrusion detection architecture based on statistical anomaly detection techniques. This article was among the first that had such a detailed distributed design.
Conclusion
This report has described the background of MANETs, including their concept, features, and status. It then covered two main challenges of MANETs, routing and intrusion detection, in detail, and discussed various issues concerning different aspects of ad hoc wireless networks. MANET is one of the most important technologies of current times. MANETs can be exploited in a wide range of applications, from military, emergency rescue, and law enforcement settings to commercial, local, and personal contexts. The field has already gained critical mass among researchers in academia as well as in industry. Moreover, there is a flurry of activity in the standards bodies in this area: many routing protocols designed for ad hoc networks have been proposed as Internet Drafts and RFCs of the IETF. However, MANET as a technology can only become successful and popular if the challenges related to routing and intrusion detection, as described in this report, are adequately addressed.
Sunday, May 18, 2008
Human Computer Interface
Abstract
Human-computer interaction (HCI) is the study of how people design, implement, and use interactive computer systems and how computers affect individuals, organizations, and society. This encompasses not only ease of use but also new interaction techniques for supporting user tasks, providing better access to information, and creating more powerful forms of communication. It involves input and output devices and the interaction techniques that use them; how information is presented and requested; how the computer’s actions are controlled and monitored; all forms of help, documentation, and training; the tools used to design, build, test, and evaluate user interfaces; and the processes that developers follow when creating interfaces.
HCI in the large is an interdisciplinary area. It is emerging as a specialty concern within several disciplines, each with different emphases: computer science (application design and engineering of human interfaces), psychology (the application of theories of cognitive processes and the empirical analysis of user behavior), sociology and anthropology (interactions between technology, work, and organization), and industrial design (interactive products).
Introduction
Research in Human-Computer Interaction (HCI) has been spectacularly successful, and has fundamentally changed computing. Just one example is the ubiquitous graphical interface used by Microsoft Windows 95, which is based on the Macintosh, which is based on work at Xerox PARC, which in turn is based on early research at the Stanford Research Laboratory (now SRI) and at the Massachusetts Institute of Technology. Another example is that virtually all software written today employs user interface toolkits and interface builders, concepts that were developed first at universities. Even the spectacular growth of the World-Wide Web is a direct result of HCI research: applying hypertext technology to browsers allows one to traverse a link across the world with a click of the mouse. Interface improvements more than anything else have triggered this explosive growth. Furthermore, the research that will lead to the user interfaces for the computers of tomorrow is happening at universities and a few corporate research labs.
The most famous definition of “Human Computer Interaction” is
“Human-computer interaction is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them.”
The Human Computer Interaction (HCI) program will play a leading role in the creation of tomorrow's exciting new user interface software and technology, by supporting the broad spectrum of fundamental research that will ultimately transform the human-computer interaction experience, so that the computer is no longer a distracting focus of attention but rather an invisible tool that empowers the individual user and facilitates natural and productive human-human collaboration.
Computer
A computer system comprises various elements, each of which affects the user of the system. Input devices for interactive use allow text entry, drawing, and selection from the screen:
Text entry: traditional keyboard, phone text entry, speech, and handwriting
Pointing: principally the mouse, but also touch pad, stylus, and others
3D interaction devices
Output display devices for interactive use:
Different types of screen mostly using some form of bitmap display
Large displays and situated displays for shared and public use
Digital paper may be usable in the near future
Memory:
Short-term memory: RAM
Long-term memory: magnetic and optical disks
Capacity limitations related to document and video storage
Access methods as they limit or help the user
Processing:
The effects when systems run too slow or too fast, the myth of the infinitely fast machine
Limitations on processing speed
Networks and their impact on system performance
Instead of workstations, computers may be in the form of embedded computational machines, such as parts of spacecraft cockpits or microwave ovens. Because the techniques for designing these interfaces bear so much relationship to the techniques for designing workstation interfaces, they can be profitably treated together. But if we weaken the computational and interaction aspects further and treat the design of machines that are mechanical and passive, such as the design of a hammer, we are clearly on the margins, and generally the relationship between humans and hammers would not be considered part of human-computer interaction. Such relationships clearly would be part of general human factors, which studies the human aspects of all designed devices but not the mechanisms of these devices. Human-computer interaction, by contrast, studies both the mechanism side and the human side, but of a narrower class of devices.
Human
Humans are limited in their capacity to process information. This has important implications for design. Information is received and responses given via a number of input and output channels:
Visual channel
Auditory channel
Haptic channel
Movement
Information is stored in memory:
Sensory memory
Short-term (working) memory
Long-term memory
Information is processed and applied:
Reasoning
Problem solving
Skill acquisition
Error
Emotion influences human capabilities.
Users share common capabilities but are individuals with differences, which should not be ignored.
If we allow the human to be a group of humans or an organization, we may consider interfaces for distributed systems, computer-aided communications between humans, or the nature of the work being cooperatively performed by means of the system. These are all generally regarded as important topics central within the sphere of human-computer interaction studies.
Interaction
Interaction is the communication between the user and the system. The interaction framework has four parts:
· User
· Input
· System
· Output
Interaction models help us to understand what is going on in the interaction between user and system. They address the translations between what the user wants and what the system does.
Ergonomics looks at the physical characteristics of the interaction and how these influence its effectiveness.
The dialog between user and system is influenced by the style of the interface.
The interaction takes place within a social and organizational context that affects both user and system.
Human-computer interaction is concerned with the joint performance of tasks by humans and machines; the structure of communication between human and machine; human capabilities to use machines (including the learnability of interfaces); algorithms and programming of the interface itself; engineering concerns that arise in designing and building interfaces; the process of specification, design, and implementation of interfaces; and design trade-offs. Human-computer interaction thus has science, engineering, and design aspects.
Goals
The goals of HCI are to produce usable and safe systems, as well as functional systems. In order to produce computer systems with good usability, developers must attempt to:
Understand the factors that determine how people use technology
Develop tools and techniques to enable building suitable systems
Achieve efficient, effective and safe interaction
Put people first
Human-computer interaction arose as a field from intertwined roots in computer graphics, operating systems, human factors, ergonomics, industrial engineering, cognitive psychology, and the systems part of computer science. Computer graphics was born from the use of CRT and pen devices very early in the history of computers. This led to the development of several human-computer interaction techniques.
Work on operating systems, meanwhile, developed techniques for interfacing input/output devices, for tuning system response time to human interaction times, for multiprocessing, and for supporting windowing environments and animation. These trends of development have given rise to "user interface management systems" and "user interface toolkits".
Cognitive Science is generally described as the interdisciplinary study of the acquisition and use of knowledge by an information processing system. All the fields that are involved in Cognitive Science share an interest in the mind. The five primary fields include: Psychology, Linguistics, Computer Science, Neuroscience, and Philosophy. Additional disciplines of study, such as Anthropology and Social Psychology, are also participating in Cognitive Science as they, too, research and develop formal structures and processes to represent the complex human system as it receives, stores, retrieves, transforms, and transmits information.
Cognition
Cognition is the processing of information from the world around us. It includes perception, attention, pattern matching, memory, language processing, decision making, and problem solving. Cognitive load is the amount of mental resources needed to perform a given task.
All user interfaces make cognitive demands on users. Users must master special rules of system use, learn new concepts, and retain information in short-term memory. They must create and refine a mental model of how the system works and how they should use it. Systems that use purely auditory interfaces further challenge human memory and attention because they present information serially and non-persistently.
Successful user interface designs must respect the limitations of human cognitive processing. If a design requires the user to hold too many items in short-term memory or to learn a complex set of commands too quickly, it will fail.
There are three cognitive challenges you should consider as your design progresses:
Conceptual complexity: How complex are the new concepts users must learn? How well do the new mental structures match concepts and procedures that users are already familiar with?
Memory load: How much information must users hold in their short-term memory? How much new material (e.g., commands, procedures) must they learn?
Attention: Is it easy for the user to attend to the most salient information? Will users' attention be divided? If they are momentarily distracted (e.g., while driving), can they seamlessly continue their interaction with the system when they are ready?
Cognitive Frameworks
Cognition is the process by which we gain knowledge. The processes that contribute to cognition include:
Understanding
Remembering
Reasoning
Attending
Being aware
Acquiring skills
Creating new ideas
A key aim of HCI is to understand how humans interact with computers, and to represent how knowledge is passed between the two.
The basis for this aspect of HCI is the science of cognitive psychology. The work of cognitive psychologists provides many lessons that can be applied in the design of computer interfaces. These results are expressed in the form of cognitive frameworks. This section describes some of the important frameworks that have been developed by psychologists.
Human Information Processing
HCI is fundamentally an information-processing task. The human information processing approach is based on the idea that human performance, from displayed information to a response, is a function of several processing stages. The nature of these stages, how they are arranged, and the factors that influence how quickly and accurately a particular stage operates, can be discovered through appropriate research methods.
Human information processing analyses are used in HCI in several ways.
Basic facts and theories about information-processing capabilities are taken into consideration when designing interfaces and tasks
Information-processing methods are used in HCI to conduct empirical studies evaluating the cognitive requirements of various tasks in which a human uses a computer
Computational models developed in HCI are intended to characterize the information processing of a user interacting with a computer, and to predict, or model, human performance with alternative interfaces.
The Multi-Store Model of Memory
The multi-store model describes memory as being formed of three 'buffers', which store memories, and control processes, which move information between the buffers. The three stores identified are:
Sensory information store
Short-term memory (more recently known as working memory)
Long-term memory
The Model Human Processor
An important concept from cognitive psychology is the model human processor (MHP). This describes the cognitive process that people go through between perception and action. It is important to the study of HCI because cognitive processing can have a significant effect on performance, including task completion time, number of errors, and ease of use. This model was based on the human information-processing model.
The model human processor consists of three interacting systems, each with its own memory and processor; a rough timing sketch based on this model follows the breakdown below.
Perceptual processor
· Outputs into audio storage
· Outputs into visual storage
Cognitive processor
· Outputs into working memory.
· Has access to:
o Working memory
o Long term memory
Motor processor
· Carries out actions
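As a rough illustration of how the model human processor can be used for estimation, the sketch below sums one perceptual cycle, one or more cognitive cycles, and one motor cycle to approximate a simple reaction time. The cycle times are commonly cited approximate middle values, not exact constants; real figures vary widely across people and tasks, so treat this only as a back-of-the-envelope aid.

```python
# Back-of-the-envelope reaction-time estimate using approximate MHP cycle times.
PERCEPTUAL_CYCLE_MS = 100   # recognise the stimulus
COGNITIVE_CYCLE_MS = 70     # decide on a response
MOTOR_CYCLE_MS = 70         # execute the key press

def reaction_time_ms(cognitive_cycles=1):
    """Perceive, think for n cycles, then act."""
    return PERCEPTUAL_CYCLE_MS + cognitive_cycles * COGNITIVE_CYCLE_MS + MOTOR_CYCLE_MS

print(reaction_time_ms())    # ~240 ms for a simple stimulus-response task
print(reaction_time_ms(3))   # choosing among alternatives takes extra cognitive cycles
```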
Distributed Cognition
Distributed cognition is a framework proposed by Hutchins (1991). Its basis is that to explain human behavior you have to look beyond the individual human and the individual task. The functional system is a collection of actors, technology, setting, and their interrelations to one another. Examples of functional systems that have been studied include:
Ship Navigation
Air Traffic Control
Computer Programming Teams
The technique is used to analyze coordination of components in the functional system. It looks at
Information and how it propagates through the system
How it transforms between the different representational states found in the functional system
User Interaction Design
Interaction design is about creating interventions in often complex situations using technology of many kinds, including PC software, the web, and physical devices.
Design involves:
Achieving goals within constraints and trade-off between these
Understanding the raw materials: computer and human
Accepting limitations of humans and of design
The design process has several stages and is iterative and never complete.
Interaction starts with getting to know the users and their context:
o Finding out who they are and what they are like ...
o Talking to them, watching them
Scenarios are rich design stories, which can be used and reused throughout design:
They help us see what users will want to do
They give a step-by-step walkthrough of users' interactions: including what they see, do and are thinking
Users need to find their way around a system; this involves:
o Helping users know where they are, where they have been and what they can do next
o Creating overall structures that are easy to understand and fit the users' needs
o Designing comprehensible screens and control panels
Complexity of design means we don't get it right first time:
o So we need iteration and prototypes to try out and evaluate
o But iteration can get trapped in local maxima: designs that have no simple improvements but are not good
o Theory and models can help give good start points
Usability
Usability is “a measure of the effectiveness, efficiency and satisfaction with which specified users can achieve specified goals in a particular environment”. It asks whether the system:
Is effective to use
Is efficient to use
Is safe to use
Has good utility
Is easy to learn
Is easy to remember how to use
Issues in Design
Who are the users?
What do we mean by needs?
How do we generate alternative designs?
How do we choose between designs?
User Centered Development
Data Collection
Data Analysis
User Modeling
Design
Prototyping
Evaluation
Data collection techniques include surveys, user questionnaires, and statistical analysis. This phase gathers information about the users who are going to use the product and the environment in which they will work.
In the data analysis phase, we characterize the people who will use the system, analyze the tasks that users must perform to accomplish their goals, and analyze the environment in which the product will operate.
(Re)Design
Identify needs/establish requirements
Evaluate
Build an interactive version
Final product
In user modeling, we build a computational model of how people perform tasks and solve problems, based on psychological principles. For example, GOMS is a family of techniques for modeling and representing the knowledge necessary for a person to perform a task.
In the design phase, all of this analysis is used to design the system: interface design determines how the product presents itself, while interaction design determines how the product should work.
Rapid prototypes are early and inexpensive ways to identify usability problems before committing lots of resources. Since the main concern in interaction design is usability, prototypes help us better understand the needs of the user.
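To illustrate the GOMS family mentioned above, the sketch below applies its simplest member, the keystroke-level model (KLM), to an invented task. The operator times are commonly cited approximations, and the operator sequence is an assumption made purely for the example.

```python
# A rough keystroke-level model (KLM) estimate; operator times are approximate.
OPERATOR_TIMES_S = {
    "K": 0.28,   # press a key or button (average skilled typist)
    "P": 1.10,   # point with the mouse at a target
    "H": 0.40,   # move the hand between keyboard and mouse
    "M": 1.35,   # mental preparation before an action
}

def klm_estimate(operators):
    """Sum the operator times for a sequence such as 'MPKMKKKK'."""
    return sum(OPERATOR_TIMES_S[op] for op in operators)

# Hypothetical task: point at the file, open the rename command, type four characters.
sequence = "MPK" + "MPK" + "H" + "M" + "KKKK"
print(f"estimated task time: {klm_estimate(sequence):.2f} s")   # about 8.3 s
```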
In the usability testing and evaluation phase, users perform a variety of tasks with a prototype (or other system) while observers record notes on what each user does and says. Typical tests are conducted with one user at a time or two users working together. Testing may include collecting data on the paths users take to do tasks, the errors they make, when and where they are confused or frustrated, how fast they do a task, whether they succeed in doing the task, and how satisfied they are with the experience. The goal of most usability testing is to uncover any problems that users may encounter so those problems can be fixed.
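A small sketch of how such observations might be tallied is given below; the session records, field names, and chosen metrics are invented purely for illustration.

```python
# Toy summary of usability-test data: task success, completion time, errors.
sessions = [
    {"user": "P1", "completed": True,  "time_s": 95,  "errors": 1},
    {"user": "P2", "completed": False, "time_s": 240, "errors": 5},
    {"user": "P3", "completed": True,  "time_s": 120, "errors": 0},
]

def summarize(sessions):
    done = [s for s in sessions if s["completed"]]
    return {
        "success_rate": len(done) / len(sessions),
        "mean_completion_time_s": sum(s["time_s"] for s in done) / max(len(done), 1),
        "total_errors": sum(s["errors"] for s in sessions),
    }

print(summarize(sessions))   # success rate 2/3, mean time 107.5 s, 6 errors in total
```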
Interaction Styles
Interaction can be seen as a dialogue between the computer and the user. Some applications have very distinct styles of interaction.
We can identify some common styles
• Command line interface
• Menus
• Natural language
• Question/answer and query dialogue
• Form-fills and spreadsheets
• WIMP
Command Line Interface
A way of expressing instructions to the computer directly. Commands can be function keys, single characters, short abbreviations, whole words, or a combination.
• Suitable for repetitive tasks
• Better for expert users than novices
• Offer direct access to system functionality
• Command names/abbreviations should be meaningful
Menus
A set of options displayed on the screen. Because the options are visible they demand less recall; they rely on recognition, so names should be meaningful. Options are selected using the mouse or numeric or alphabetic keys, and are often hierarchically grouped, so sensible grouping is needed.
Menu systems can be
• Purely text based, with options presented as numbered choices, or
• can have graphical component, with menu appearing in box and choices made either by typing initial letter, or moving around with arrow keys
Natural language
An attractive option: familiar speech recognition or typed natural language can be used
Problems:
• Vague
• Ambiguous
Form-Filling Interfaces
• Primarily for data entry or data retrieval.
• Screen like paper form.
• Data put in relevant place.
• Requires good design and obvious correction facilities.
WIMP Interface
Windows
Icons
Menus
Pointers
Windows
Areas of the screen that behave as if they were independent terminals
• can contain text or graphics
• can be moved or resized
• can overlap and obscure each other, or can be laid out next to one another
• Scrollbars allow the user to move the contents of the window up and down or from side to side
• Title bars describe the name of the window
Icons
Small picture or image, used to represent some object in the interface, often a window. Windows can be closed down to this small representation (iconised) allowing many windows to be accessible. Icons can be many and various - highly stylized or realistic representations.
Pointers
Important component, since WIMP style relies on pointing and selecting things such as icons and menu items.
• Usually achieved with mouse
• Joystick, trackball, cursor keys or keyboard shortcuts are also used
• Wide variety
Menus
A choice of operations or services that can be performed, offered on the screen.
The required option is selected with the pointer.
· Problem - menus can take up a lot of screen space
· Solution - use pull-down or pop-up menus
· Pull-down menus are dragged down from a single title at the top of the screen
· Pop-up menus appear when a particular region of the screen is clicked on
Some menus are pin-up menus - they stay on the screen until explicitly requested to go away. Another type is the fall-down menu - similar to the pull-down, but the bar doesn’t have to be explicitly selected.
• Also cascading menus - one menu selection opens another menu adjacent to it, and so on.
• Pie menus - menu options arranged in a circle.
Pie menu items are easier to select (larger target area) and quicker to reach. Keyboard accelerators are sometimes offered – key combinations that have the same effect as selecting the menu item.
Interaction Devices
Different tasks, different types of data and different types of users all require different user interface devices. In most cases, interface devices are either input devices or output devices, though some, such as a touch screen, combine both. In either case, the devices available provide the framework for the interaction setting.
Interface devices correlate to the human senses
Nowadays, a device usually is designed either for input or for output
Input Devices
Most commonly, personal computers are equipped with text input and pointing devices. For text input, the QWERTY keyboard is the standard solution, but depending on the purpose of the system, more specialized input devices like special keyboards, scanner with character recognition, pen or even voice input may be the better choice.
At the same time, the mouse is not the only imaginable pointing device: Alternatives for similar but slightly different purposes include touch pad, trackball, joystick or even eye gaze.
Just for completeness, devices for 3D manipulation should be mentioned as well. Note that 3D manipulation is a matter of not only moving to a particular location, but also choosing a particular orientation. Determining pitch, yaw, and roll in addition to the location requires six degrees of freedom, not only three.
Output Devices
Output from a personal computer in most cases means output of visual data. Devices for "dynamic" visualization include the traditional cathode ray tube (CRT), liquid crystal display (LCD), or specialized devices like a pilot's head-up display. Printers are also a very important device for visual output, but they are substantially different from screens in that their output is static - it won't change over time except for the yellowing of paper...
In order to increase bandwidth for information reaching the user, it is an important goal to use more channels in addition to visual output. One commonly used supplement for visual information is sound, but its true potential is often not recognized. Audible feedback can make interaction substantially more comfortable for the user, providing unambiguous information about the system state and success or failure of interaction (e. g., a button press), without putting still more load onto the visual channel.
Future of Human Computer Interaction
Predicting the future is notoriously difficult. Suppose 100 years ago someone suggested that every home in the United States would soon have a bell that anyone in the world could ring anytime, day or night. Would you have believed it? Nevertheless, the telephone caught on and has become a technology conspicuous only by its absence.
So we cannot say for certain where the future will take us. It depends both on advances in the computer industry and on our understanding of human psychology. We only know the basics about people; if we can understand them better, we can make better interaction designs.
New areas like AI and virtual reality are opening new doors for human-computer interaction, and new interface devices such as wearable computers are part of its future.
Conclusion
The subject of Human Computer Interaction is very rich both in terms of the disciplines it draws from as well as opportunities for research. Discussed here was just a small subset of the topics contained within HCI. The study of user interface provides a double-sided approach to understanding how humans and machines interact. By studying existing interfaces (such as the graphical user interface or the command line interface), we gain an understanding of how the human mind processes information. We gain insight into how human memory deals with the information presented, as well as its limitations.
Alternatively, by studying human physiology and psychology, we can design better interfaces for people to interact with computers. Work in this domain is only beginning (indeed, the number of papers written on this topic has increased in recent years), and there is much we don't yet know about the way the human mind works that would allow more perfect user interfaces to be built.
Human-computer interaction (HCI) is the study of how people design, implement, and use interactive computer systems and how computers affect individuals, organizations, and society. This encompasses not only ease of use but also new interaction techniques for supporting user tasks, providing better access to information, and creating more powerful forms of communication. It involves input and output devices and the interaction techniques that use them; how information is presented and requested; how the computer’s actions are controlled and monitored; all forms of help, documentation, and training; the tools used to design, build, test, and evaluate user interfaces; and the processes that developers follow when creating Interfaces.
HCI in the large is an interdisciplinary area. It is emerging as a specialty concern within several disciplines, each with different emphases: computer science (application design and engineering of human interfaces), psychology (the application of theories of cognitive processes and the empirical analysis of user behavior), sociology and anthropology (interactions between technology, work, and organization), and industrial design (interactive products).
Introduction
Research in Human-Computer Interaction (HCI) has been spectacularly successful, and has fundamentally changed computing. Just one example is the ubiquitous graphical interface used by Microsoft Windows 95, which is based on the Macintosh, which is based on work at Xerox PARC, which in turn is based on early research at the Stanford Research Laboratory (now SRI) and at the Massachusetts Institute of Technology. Another example is that virtually all software written today employs user interface toolkits and interface builders, concepts that were developed first at universities. Even the spectacular growth of the World-Wide Web is a direct result of HCI research: applying hypertext technology to browsers allows one to traverse a link across the world with a click of the mouse. Interface improvements more than anything else has triggered this explosive growth. Furthermore, the research that will lead to the user interfaces for the computers of tomorrow is happening at universities and a few corporate research labs.
The most famous definition of “Human Computer Interaction” is
“Human-computer interaction is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them.”
Human Computer Interaction (HCI) program will play a leading role in the creation of tomorrow's exciting new user interface software and technology, by supporting the broad spectrum of fundamental research that will ultimately transform the human-computer interaction experience so the computer is no longer a distracting focus of attention but rather an invisible tool that empowers the individual user and facilitates natural and productive human-human collaboration.
computer
A computer system comprises various elements, each of which affects the user of the system.devices for interactive use, allowing text entry, drawing and selection from the screen:
Text entry: traditional keyboard, phone text entry, speech and handwriting principally the mouse, but also touch pad, stylus, and others
3D interaction devices
Output display devices for interactive use:
Different types of screen mostly using some form of bitmap display
Large displays and situated displays for shared and public use
Digital paper may be usable in the near future
Memory:
Short-term memory: RAM
Long-term memory: magnetic and optical disks
Capacity limitations related to document and video storage
Access methods as they limit or help the user
Processing:
The effects when systems run too slow or too fast, the myth of the infinitely fast machine
Limitations on processing speed
Networks and their impact on system performance
Instead of workstations, computers may be in the form of embedded computational machines, such as parts of spacecraft cockpits or microwave ovens. Because the techniques for designing these interfaces bear so much relationship to the techniques for designing workstations interfaces, they can be profitably treated together. But if we weaken the computational and interaction aspects more and treat the design of machines that are mechanical and passive, such as the design of a hammer, we are clearly on the margins, and generally the relationships between humans and hammers would not considered part of human-computer interaction. Such relationships clearly would be part of general human factors, which studies the human aspects of all designed devices, but not the mechanisms of these devices. Human-computer interaction, by contrast, studies both the mechanism side and the human side, but of a narrower class of devices.
Human
Humans are limited in their capacity to process information. This has important implications for design. Information is received and responses given via a number of input and output channels:
Visual channel
Auditory channel
Haptic channel
Movement
Information is stored in memory:
sensory memory
Short-term (working) memory
Long-term memory
Information is processed and applied:
Reasoning
Problem solving
Skill acquisition
Error
Emotion influences human capabilities.
Users share common capabilities but are individuals with differences, which should not be ignored.
If we allow the human to be a group of humans or an organization, we may consider interfaces for distributed systems, computer-aided communications between humans, or the nature of the work being cooperatively performed by means of the system. These are all generally regarded as important topics central within the sphere of human-computer interaction studies.
Interaction
The communication between the user and the system. Their interaction framework has four parts:
· User
· Input
· System
· Output
interaction models help us to understand what is going on in the interaction between user and system. They address the translations between what the user wants and what the system does.
Ergonomics looks at the physical characteristics of the interaction and how these influence its effectiveness.
The dialog between user and system is influenced by the style of the interface.
The interaction takes place within a social and organizational context that affects both user and system.
Human-computer interaction is concerned with the joint performance of tasks by humans and machines; the structure of communication between human and machine; human capabilities to use machines (including the learnability of interfaces); algorithms and programming of the interface itself; engineering concerns that arise in designing and building interfaces; the process of specification, design, and implementation of interfaces; and design trade-offs. Human-computer interaction thus has science, engineering, and design aspects.
Goals
The goals of HCI are to produce usable and safe system, as well as functional systems. In order to produce computer system with good usability, developer must attempt to:
Understand the factors that determines how people use technology
Develop tools and technique to enable building suitable system
Achieve efficient, effective and safe interaction
Put people first
Human-computer interaction arose as a field from intertwined roots in computer graphics, operating systems, human factors, ergonomics, industrial engineering, cognitive psychology, and the systems part of computer science. Computer graphics was born from the use of CRT and pen devices very early in the history of computers. This led to the development of several human-computer interaction techniques.
Work on operating systems, meanwhile, developed techniques for interfacing input/output devices, for tuning system response time to human interaction times, for multiprocessing, and for supporting windowing environments and animation. This trends of development has currently given rise to "user interface management systems" and "user interface toolkits".
Cognitive Science is generally described as the interdisciplinary study of the acquisition and use of knowledge by an information processing system. All the fields that are involved in Cognitive Science share an interest in the mind. The five primary fields include: Psychology, Linguistics, Computer Science, Neuroscience, and Philosophy. Additional disciplines of study, such as Anthropology and Social Psychology, are also participating in Cognitive Science as they, too, research and develop formal structures and processes to represent the complex human system as it receives, stores, retrieves, transforms, and transmits information.
Cognition
cognition is the processing of information from the world around us. It includes perception, attention, pattern matching, memory, language processing, decision making, and problem solving. Cognitive load is the amount of mental resources needed to perform a given task.
All user interfaces make cognitive demands on users. Users must master special rules of system use, learn new concepts, and retain information in short-term memory. They must create and refine a mental model of how the system works and how they should use it. Systems that use purely auditory interfaces further challenge human memory and attention because they present information serially and non-persistently.
Successful user interface designs must respect the limitations of human cognitive processing. If a design requires the user to hold too many items in short-term memory or to learn a complex set of commands too quickly, it will fail.
There are three cognitive challenges you should consider as your design progresses:
Conceptual complexity: How complex are the new concepts callers must learn? How well do new mental structures match concepts and procedures that users are already familiar with?
Memory load: How much information must callers hold in their short-term memory? How much new material (e.g., commands, procedures) must they learn?
Attention: Is it easy for the caller to attend to the most salient information? Will callers' attention be divided? If they are momentarily distracted (e.g., while driving), can they seamlessly continue their interaction with the system when they are ready?
Cognitive Frameworks
Cognition is the process by which we gain knowledge. The processes, which contribute to cognition, include:
Understanding
· Remembering
Reasoning
Attending
Being aware
Acquiring skills
Creating new ideas
A key aim of HCI is to understand how humans interact with computers, and to represent how knowledge is passed between the two.
The basis for this aspect of HCI is the science of cognitive psychology. The results of work of cognitive psychologists provide many lessons, which can be applied in the design of computer interfaces. These results are expressed in the form of cognitive frameworks. This section describes some of the important frameworks, which have been developed by psychologists.
Human Information Processing
HCI is fundamentally an information-processing task. The human information processing approach is based on the idea that human performance, from displayed information to a response, is a function of several processing stages. The nature of these stages, how they are arranged, and the factors that influence how quickly and accurately a particular stage operates, can be discovered through appropriate research methods.
Human information processing analyses are used in HCI in several ways:
· Basic facts and theories about information-processing capabilities are taken into consideration when designing interfaces and tasks.
· Information-processing methods are used in HCI to conduct empirical studies evaluating the cognitive requirements of various tasks in which a human uses a computer.
· Computational models developed in HCI are intended to characterize the information processing of a user interacting with a computer, and to predict, or model, human performance with alternative interfaces.
The Multi-Store Model of Memory
This is a model of memory formed of three 'buffers', which store information, and control processes, which move information between the buffers. The three stores identified are:
Sensory information store
Short-term memory (more recently known as working memory)
Long-term memory
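Purely as an illustrative sketch (the multi-store model is a psychological theory, not an algorithm), the hypothetical Python fragment below represents the three stores and the control processes that move information between them: attention moves items from the sensory store into short-term memory, and rehearsal transfers them into long-term memory. The capacity limit of 7 is the classic "seven plus or minus two" estimate and is used here only for illustration.

# Illustrative sketch of the multi-store model: three buffers plus control processes.
sensory_store: list[str] = []        # very brief, large-capacity buffer
short_term_memory: list[str] = []    # limited-capacity working memory
long_term_memory: set[str] = set()   # effectively unlimited, durable store

STM_CAPACITY = 7                     # the classic "7 +/- 2" chunk limit (approximate)

def perceive(stimulus: str) -> None:
    sensory_store.append(stimulus)

def attend(stimulus: str) -> None:
    """Control process: attention moves an item from the sensory store into STM."""
    if stimulus in sensory_store and len(short_term_memory) < STM_CAPACITY:
        sensory_store.remove(stimulus)
        short_term_memory.append(stimulus)

def rehearse(item: str) -> None:
    """Control process: rehearsal transfers an item from STM into long-term memory."""
    if item in short_term_memory:
        long_term_memory.add(item)

perceive("phone number")
attend("phone number")
rehearse("phone number")
print(short_term_memory, long_term_memory)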
The Model Human Processor
An important concept from cognitive psychology is the model human processor (MHP). This describes the cognitive process that people go through between perception and action. It is important to the study of HCI because cognitive processing can have a significant effect on performance, including task completion time, number of errors, and ease of use. This model was based on the human information-processing model.
The model human processor consists of three interacting systems. Each has its own memory and processor.
Perceptual processor
· Outputs into audio storage
· Outputs into visual storage
Cognitive processor
· Outputs into working memory.
· Has access to:
o Working memory
o Long term memory
Motor processor
· Carries out actions
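As a rough illustration of how these three processors combine, the hypothetical sketch below adds up nominal cycle times to estimate a simple reaction time. The values (perceptual around 100 ms, cognitive around 70 ms, motor around 70 ms) are the commonly cited middle estimates from the MHP literature and should be treated as approximations, not measurements.

# Minimal sketch: estimating a simple reaction time with the Model Human Processor.
# Cycle times are the commonly cited middle estimates; real values vary widely.
PERCEPTUAL_CYCLE_MS = 100  # recognize that a stimulus has appeared
COGNITIVE_CYCLE_MS = 70    # decide which response to make
MOTOR_CYCLE_MS = 70        # execute the motor action (e.g., press a key)

def reaction_time_ms(cognitive_cycles: int = 1) -> int:
    """Estimate time from stimulus onset to key press.
    cognitive_cycles: how many decision steps the task needs
    (a simple reaction uses 1; a choice reaction uses more)."""
    return PERCEPTUAL_CYCLE_MS + cognitive_cycles * COGNITIVE_CYCLE_MS + MOTOR_CYCLE_MS

print("Simple reaction:", reaction_time_ms(), "ms")       # about 240 ms
print("Two-choice reaction:", reaction_time_ms(2), "ms")  # about 310 ms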
Distributed Cognition
Distributed cognition is a framework proposed by Hutchins (1991). Its basis is that to explain human behavior you have to look beyond the individual human and the individual task. A functional system is a collection of actors, technology and setting, together with their interrelations to one another. Examples of functional systems that have been studied include:
Ship Navigation
Air Traffic Control
Computer Programming Teams
The technique is used to analyze the coordination of components in the functional system. It looks at:
Information and how it propagates through the system
How it transforms between the different representational states found in the functional system
User Interaction Design
Interaction design is about creating interventions in often complex situations, using many kinds of technology including PC software, the web and physical devices.
Design involves:
Achieving goals within constraints, and making trade-offs between them
Understanding the raw materials: computer and human
Accepting limitations of humans and of design
The design process has several stages and is iterative and never complete.
Interaction starts with getting to know the users and their context:
o Finding out who they are and what they are like ...
o Talking to them, watching them
Scenarios are rich design stories, which can be used and reused throughout design:
They help us see what users will want to do
They give a step-by-step walkthrough of users' interactions: including what they see, do and are thinking
Users need to find their way around a system; this involves:
o Helping users know where they are, where they have been and what they can do next
o Creating overall structures that are easy to understand and fit the users' needs
o Designing comprehensible screens and control panels
Complexity of design means we don't get it right first time:
o So we need iteration and prototypes to try out and evaluate
o But iteration can get trapped in local maxima: designs that have no simple improvements but are still not good
o Theory and models can help give good start points
Usability
“It is a measure of the effectiveness, efficiency and satisfaction with which specified users can achieve specified goals in a particular environment”. It asks whether the system:
Is effective to use
Is efficient to use
Is safe to use
Has good utility
Is easy to learn
Is easy to remember how to use
Issues in Design
Who are the users?
What do we mean by needs?
How do we generate alternative designs?
How do we choose between designs?
User Centered Development
Data Collection
Data Analysis
User Modeling
Design
Prototyping
Evaluation
Data collection techniques include surveys, user questionnaires and statistical analysis. This phase gathers information about the users who are going to use the product and the environment in which they will work.
In the data analysis phase, we characterize the people who will use the system, analyze the tasks users must perform to accomplish their goals, and analyze the environment in which the product will be used.
The design process cycles through the following activities:
· Identify needs/establish requirements
· (Re)Design
· Build an interactive version
· Evaluate
· Final product
User modeling builds a computational model of how people perform tasks and solve problems, based on psychological principles. For example, GOMS is a family of techniques for modeling and representing the knowledge necessary for a person to perform a task. In the design phase, all of this analysis is used to design the system: interface design shows how the product presents itself, while interaction design specifies how the product should behave. Rapid prototypes are early and inexpensive ways to identify usability problems before committing large amounts of resources. Because the main concern in interaction design is usability, prototypes help us better understand the needs of users.
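As a hedged illustration of the GOMS family mentioned above, the sketch below uses the Keystroke-Level Model (KLM), the simplest GOMS variant, to estimate task time from a sequence of primitive operators. The operator durations are the commonly published approximate values, and the task itself is hypothetical.

# Minimal Keystroke-Level Model (KLM) sketch -- the simplest member of the GOMS family.
# Operator times (seconds) are rough published approximations, not measurements.
KLM_OPERATORS = {
    "K": 0.28,  # press a key or button (average skilled typist)
    "P": 1.10,  # point to a target with a mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "B": 0.10,  # press or release a mouse button
}

def klm_estimate(sequence: list[str]) -> float:
    """Sum operator times for a task described as a list of KLM operators."""
    return sum(KLM_OPERATORS[op] for op in sequence)

# Hypothetical task: home hand on mouse, think, point at a menu and click it,
# think, point at a menu item and click it.
task = ["H", "M", "P", "B", "B", "M", "P", "B", "B"]
print(f"Estimated task time: {klm_estimate(task):.2f} s")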
In usability testing and evaluation phase, users perform a variety of tasks with a prototype (or other system) while observers record notes on what each user does and says. Typical tests are conducted with one user at a time or two users working together. Testing may include collecting data on the paths users take to do tasks, the errors they make, when and where they are confused or frustrated, how fast they do a task, whether they succeed in doing the task, and how satisfied they are with the experience. The goal of most usability testing is to uncover any problems that users may encounter so those problems can be fixed.
Interaction Styles
Interaction can be seen as a dialogue between the computer and the user. Some applications have very distinct styles of interaction.
We can identify some common styles
• Command line interface
• Menus
• Natural language
• Question/answer and query dialogue
• Form-fills and spreadsheets
• WIMP
Command Line Interface
A way of expressing instructions to the computer directly. Commands can be function keys, single characters, short abbreviations, whole words, or a combination of these.
• Suitable for repetitive tasks
• Better for expert users than novices
• Offer direct access to system functionality
• Command names/abbreviations should be meaningful
Menus
A set of options displayed on the screen. Because the options are visible they demand less recall - they rely on recognition, so names should be meaningful. Options are selected using the mouse or numeric or alphabetic keys. Options are often grouped hierarchically, so sensible grouping is needed.
Menu systems can be
• Purely text based, with options presented as numbered choices, or
• can have graphical component, with menu appearing in box and choices made either by typing initial letter, or moving around with arrow keys
Natural language
An attractive option because it is familiar to users: speech recognition or typed natural language can be used
Problems:
• Vague
• Ambiguous
Form-Filling Interfaces
• Primarily for data entry or data retrieval.
• Screen like paper form.
• Data put in relevant place.
• Requires good design and obvious correction facilities.
WIMP Interface
Windows
Icons
Menus
Pointers
Windows
Areas of the screen that behave as if they were independent terminals
• can contain text or graphics
• can be moved or resized
• can overlap and obscure each other, or can be laid out next to one another
• Scrollbars allow the user to move the contents of the window up and down or from side to side
• Title bars describe the name of the window
Icons
Small picture or image, used to represent some object in the interface, often a window. Windows can be closed down to this small representation (iconised) allowing many windows to be accessible. Icons can be many and various - highly stylized or realistic representations.
Pointers
Important component, since WIMP style relies on pointing and selecting things such as icons and menu items.
• Usually achieved with mouse
• Joystick, trackball, cursor keys or keyboard shortcuts are also used
• Wide variety
Menus
A choice of operations or services that can be performed, offered on the screen.
The required option is selected with the pointer.
· Problem - menus can take up a lot of screen space
· Solution - use pull-down or pop-up menus
· Pull-down menus are dragged down from a single title at the top of the screen
· Pop-up menus appear when a particular region of the screen is clicked on
Some menus are pin-up menus - they stay on the screen until explicitly requested to go away. Another type is the fall-down menu - similar to the pull-down, but the bar doesn’t have to be explicitly selected.
• Also cascading menus - one menu selection opens another menu adjacent to it, and so on.
• Pie menus - menu options arranged in a circle.
It is easier and quicker to select an item because the target area is larger. Keyboard accelerators are sometimes offered – key combinations that have the same effect as selecting the menu item.
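The claim that a larger target area makes selection easier and quicker can be made concrete with Fitts' law, a standard HCI model of pointing time that is not named in the text above; the coefficients in the sketch below are illustrative, not measured.

import math

# Fitts' law sketch: movement time grows with distance D and shrinks with target width W.
# MT = a + b * log2(D / W + 1); a and b below are illustrative values only.
def fitts_movement_time_ms(distance: float, width: float,
                           a: float = 50.0, b: float = 150.0) -> float:
    return a + b * math.log2(distance / width + 1)

# A pie menu slice close to the cursor (short distance, wide target) versus a small
# item at the far end of a linear menu (long distance, narrow target).
print("Pie menu slice:   %.0f ms" % fitts_movement_time_ms(distance=40, width=60))
print("Linear menu item: %.0f ms" % fitts_movement_time_ms(distance=200, width=20))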
Interaction Devices
Different tasks, different types of data and different types of users all require different user interface devices. In most cases, interface devices are either input devices or output devices, though, for example, a touch screen combines both. In either case, the devices available provide the framework for the interaction.
Interface devices correlate to the human senses
Nowadays, a device is usually designed either for input or for output
Input Devices
Most commonly, personal computers are equipped with text input and pointing devices. For text input, the QWERTY keyboard is the standard solution, but depending on the purpose of the system, more specialized input devices like special keyboards, scanner with character recognition, pen or even voice input may be the better choice.
At the same time, the mouse is not the only imaginable pointing device: Alternatives for similar but slightly different purposes include touch pad, trackball, joystick or even eye gaze.
Just for completeness, devices for 3D manipulation should be mentioned as well. Note that 3D manipulation is a matter of not only moving to a particular location, but also choosing a particular orientation. To determine pitch, yaw and roll, in addition to the location, requires six degrees of freedom, not only three.
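To make the six-degrees-of-freedom point concrete, the hypothetical data structure below shows what a 3D input device must report: three translational values plus three rotational ones. A conventional mouse supplies only two of the six.

from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """A full 3D pose needs six values: location (x, y, z) plus orientation."""
    x: float      # translation along the horizontal axis
    y: float      # translation along the vertical axis
    z: float      # translation along the depth axis
    pitch: float  # rotation about the x axis, in degrees
    yaw: float    # rotation about the y axis, in degrees
    roll: float   # rotation about the z axis, in degrees

# A 2D mouse only reports two of these six values (x and y).
cursor = Pose6DoF(x=0.3, y=1.2, z=0.0, pitch=0.0, yaw=0.0, roll=0.0)
print(cursor)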
Output Devices
Output from a personal computer in most cases means output of visual data. Devices for "dynamic" visualization include the traditional cathode ray tube (CRT), liquid crystal display (LCD), or specialized devices like a pilot's head-up display. Printers are also a very important device for visual output, but they are substantially different from screens in that their output is static - it won't change over time except for the yellowing of paper...
In order to increase bandwidth for information reaching the user, it is an important goal to use more channels in addition to visual output. One commonly used supplement for visual information is sound, but its true potential is often not recognized. Audible feedback can make interaction substantially more comfortable for the user, providing unambiguous information about the system state and success or failure of interaction (e. g., a button press), without putting still more load onto the visual channel.
Future of Human Computer Interaction
Predicting the future is notoriously difficult. Suppose 100 years ago someone suggested that every home in the United States would soon have a bell that anyone in the world could ring anytime, day or night. Would you have believed it? Nevertheless, the telephone caught on and has become a technology conspicuous only by its absence.
So we cannot say with certainty where the future will take us. It depends both on advances in the computer industry and on our understanding of human psychology; we still know only the basics about how people think and behave. If we can understand humans better, we can create better interaction designs.
New areas such as AI and virtual reality are opening new doors for human-computer interaction, and new interface devices such as wearable computers are likely to shape the future of HCI.
Conclusion
The subject of Human Computer Interaction is very rich both in terms of the disciplines it draws from as well as opportunities for research. Discussed here was just a small subset of the topics contained within HCI. The study of user interface provides a double-sided approach to understanding how humans and machines interact. By studying existing interfaces (such as the graphical user interface or the command line interface), we gain an understanding of how the human mind processes information. We gain insight into how human memory deals with the information presented, as well as its limitations.
Alternatively, by studying human physiology and psychology, we can design better interfaces for people to interact with computers. Work in this domain is only beginning (indeed, the number of papers written on this topic has increased in the past few years), and there is much that we do not yet know about the way the human mind works that would allow better user interfaces to be built.
BLUETOOTH
ABSTRACT:
Bluetooth wireless technology is a short-range communications technology intended to replace the cables connecting portable and/or fixed devices while maintaining high levels of security. The key features of Bluetooth technology are robustness, low power, and low cost. The Bluetooth specification defines a uniform structure for a wide range of devices to connect and communicate with each other.
Bluetooth technology has achieved global acceptance such that any Bluetooth enabled device, almost everywhere in the world, can connect to other Bluetooth enabled devices in proximity. Bluetooth enabled electronic devices connect and communicate wirelessly through short-range, ad hoc networks known as piconets. Each device can simultaneously communicate with up to seven other devices within a single piconet. Each device can also belong to several piconets simultaneously. Piconets are established dynamically and automatically as Bluetooth enabled devices enter and leave radio proximity.
A fundamental Bluetooth wireless technology strength is the ability to handle both data and voice transmissions simultaneously. This enables users to enjoy a variety of innovative solutions such as a hands-free headset for voice calls, printing and fax capabilities, and synchronization of PDA, laptop, and mobile phone applications, to name a few.
Introduction
The name Bluetooth is derived from the cognomen of Harald Bluetooth, a 10th-century king of Denmark and Norway. He is known for his unification of previously warring tribes from Denmark (including Scania, in present-day Sweden, where the Bluetooth technology was invented) and Norway. Bluetooth likewise was intended to unify different technologies such as computers and mobile phones. The name may have been inspired less by the historical Harald than by the loose interpretation of him in The Long Ships by Frans Gunnar Bengtsson, a best-selling Swedish Viking-inspired novel. The Bluetooth logo merges the Nordic runes analogous to the modern Latin H and B: Haglaz and Berkanan. The logo is similar to an older logo for Beauknit Textiles, a division of Beauknit Corporation; that logo, using the connection of a reversed K and B for Beauknit, is wider and has rounded corners, but is otherwise the same.
Specification Make-Up
Unlike many other wireless standards, the Bluetooth wireless specification gives product developers both link layer and application layer definitions, supporting data and voice applications.
Spectrum
Bluetooth technology operates in the unlicensed industrial, scientific and medical (ISM) band at 2.4 to 2.485 GHz, using a spread spectrum, frequency hopping, full-duplex signal at a nominal rate of 1600 hops/sec. The 2.4 GHz ISM band is available and unlicensed in most countries.
Interference
Bluetooth technology’s adaptive frequency hopping (AFH) capability was designed to reduce interference between wireless technologies sharing the 2.4 GHz spectrum. AFH works within the spectrum to take advantage of the available frequencies. This is done by detecting other devices in the spectrum and avoiding the frequencies they are using. This adaptive hopping allows for more efficient transmission within the spectrum, providing users with greater performance even when other technologies are used alongside Bluetooth technology. The signal hops among 79 frequencies at 1 MHz intervals to give a high degree of interference immunity.
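As a hedged numerical sketch of the hopping described above: classic Bluetooth places its 79 channels at 1 MHz spacing starting at 2402 MHz, and adaptive frequency hopping simply excludes channels observed to be noisy from the hop set. The code below illustrates that idea only; it is not the deterministic hop-selection kernel defined in the specification, and the Wi-Fi channel range used is just an example.

import random

# 79 classic Bluetooth channels, 1 MHz apart, starting at 2402 MHz (2402..2480 MHz).
CHANNELS_MHZ = [2402 + k for k in range(79)]
HOPS_PER_SECOND = 1600

def afh_hop_sequence(bad_channels_mhz: set[int], hops: int) -> list[int]:
    """Illustrative adaptive frequency hopping: hop pseudo-randomly over the
    channels NOT marked as noisy (e.g., occupied by a Wi-Fi network).
    The real specification uses a deterministic selection kernel instead."""
    usable = [ch for ch in CHANNELS_MHZ if ch not in bad_channels_mhz]
    return [random.choice(usable) for _ in range(hops)]

# Example: pretend a Wi-Fi network occupies roughly 2437-2461 MHz.
noisy = set(range(2437, 2462))
print("Next 8 hop frequencies (MHz):", afh_hop_sequence(noisy, hops=8))
print("Dwell time per hop: %.3f ms" % (1000 / HOPS_PER_SECOND))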
Range
The operating range depends on the device class:
· Class 3 radios – have a range of up to 1 meter or 3 feet
· Class 2 radios – most commonly found in mobile devices – have a range of 10 meters or 33 feet
· Class 1 radios – used primarily in industrial use cases – have a range of 100 meters or 300 feet
Power
The most commonly used radio is Class 2 and uses 2.5 mW of power. Bluetooth technology is designed to have very low power consumption. This is reinforced in the specification by allowing radios to be powered down when inactive.
Data Rate
1 Mbps for Version 1.2; up to 3 Mbps supported for Version 2.0 + EDR.
Why Choose Bluetooth wireless technology?
Bluetooth wireless technology is the simple choice for convenient, wire-free, short-range communication between devices. It is a globally available standard that wirelessly connects mobile phones, portable computers, cars, stereo headsets, MP3 players, and more. Thanks to the unique concept of “profiles,” Bluetooth enabled products do not require users to install driver software. The technology is now in its fourth version of the specification and continues to develop, building on its inherent strengths — small-form-factor radio, low power, low cost, built-in security, robustness, ease of use, and ad hoc networking abilities. Bluetooth wireless technology is the leading proven short-range wireless technology on the market today, shipping over five million units every week with an installed base of over 500 million units at the end of 2005.
Globally Available
The Bluetooth wireless technology specification is available free of charge to member companies around the globe. Manufacturers from many industries are busy implementing the technology in their products to reduce the clutter of wires, make seamless connections, stream stereo audio, transfer data or carry voice communications. Bluetooth technology operates in the 2.4 GHz band, one of the unlicensed industrial, scientific, medical (ISM) radio bands. As such, there is no cost for the use of Bluetooth technology: while you must subscribe to a cellular provider to use GSM or CDMA, with Bluetooth technology there is no cost associated with use beyond the cost of the device.
Range of Devices
Bluetooth technology is available in an unprecedented range of applications, from mobile phones to automobiles to medical devices, for use by consumers, industrial markets, enterprises, and more. The low power consumption, small size and low cost of the chipset solution enable Bluetooth technology to be used in the tiniest of devices. A wide range of products made available by SIG members is listed in the Bluetooth product directory and the component product listing.
Ease of Use
Bluetooth technology is an ad hoc technology that requires no fixed infrastructure and is simple to install and set up. You don’t need wires to get connected. The process for a new user is easy: you get a Bluetooth branded product, check the profiles available and connect it to another Bluetooth device with the same profiles. The subsequent PIN code process is as easy as identifying yourself at an ATM. When out and about, you carry your personal area network (PAN) with you and can even connect to others.
Globally Accepted Specification
Bluetooth wireless technology is the most widely supported, versatile, and secure wireless standard on the market today. The globally available qualification program tests member products for their accordance with the standard. Since the first release of the Bluetooth specification in 1999, over 4000 companies have become members of the Bluetooth Special Interest Group (SIG). Meanwhile, the number of Bluetooth products on the market is multiplying rapidly: volumes have doubled for the fourth consecutive year and are likely to reach an installed base of 500 million units by the close of 2005.
Secure Connections
From the start, Bluetooth technology was designed with security needs in mind. Since it is globally available in the open 2.4 GHz ISM band, robustness was built in from the beginning. With adaptive frequency hopping (AFH), the signal “hops” and thus limits interference from other signals. Further, Bluetooth technology has built-in security features such as 128-bit encryption and PIN code authentication. When Bluetooth products identify themselves, they use the PIN code the first time they connect: once connected, always securely connected.
Bluetooth Technology
The foundation of Bluetooth wireless technology is composed of the adopted specifications that are utilized and developed by the Bluetooth SIG’s global membership as guidelines for producing interoperable products. The Bluetooth specifications are categorized as the core specification, profiles, protocols, and transports.
The Bluetooth core specification describes the protocol stack up through the L2CAP layer and the characteristics of each of the relevant protocols, as well as the relationship between them.
Above the Bluetooth core specification are a set of protocols that have been defined for use by one or more profiles.
Profile specifications define a set of features required to support a particular usage model or set of usage models. A profile specification document describes how to use the protocol stack to implement a given profile.
Transport specifications define physical interfaces that can be used to implement the Host Controller Interface (HCI). The HCI transports are used in products that choose to separate the implementation of the Host and Controller functions.
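To summarise how the core specification, protocols, profiles and HCI transports relate, the sketch below lists the classic layers of a Bluetooth stack from the radio upwards. The exact set of protocols present varies from product to product, so treat this as an orientation aid rather than a normative picture of the specification.

# Rough map of a classic Bluetooth stack (bottom to top); actual products vary.
BLUETOOTH_STACK = [
    "Radio (2.4 GHz physical layer)",
    "Baseband / Link Controller",
    "Link Manager Protocol (LMP)",
    "Host Controller Interface (HCI)",          # optional boundary between Controller and Host
    "L2CAP",                                    # top of the core specification
    "Service and transport protocols (e.g., SDP, RFCOMM)",
    "Profiles (e.g., Headset, Serial Port)",    # usage models built on the protocols
]

# Print the stack top-down, the way profile documents usually describe it.
for layer in reversed(BLUETOOTH_STACK):
    print(layer)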
Comparison with Other Technologies
The wireless world continues to grow as engineers develop faster, more robust technologies to free us from wires for greater simplicity, convenience, and efficiency. From short range to long range, the wireless landscape has taken shape in our lives. Bluetooth wireless technology, though one among many, has a wide variety of applications. A comparison of Bluetooth technology with other technologies is helpful when deciding which technologies to implement or products to purchase.
Bluetooth Wireless Technology
· Bluetooth wireless technology is geared towards voice and data applications
· Bluetooth wireless technology operates in the unlicensed 2.4 GHz spectrum
· Bluetooth wireless technology can operate over a distance of 10 meters or 100 meters depending on the Bluetooth device class. The peak data rate with EDR is 3 Mbps
· Bluetooth wireless technology is able to penetrate solid objects
· Bluetooth technology is omni-directional and does not require line-of-sight positioning of connected devices
· Security has always been and continues to be a priority in the development of the Bluetooth specification. The Bluetooth specification allows for three modes of security
· The cost of Bluetooth chips is under $3
Wi-Fi (IEEE 802.11)
· Bluetooth technology costs about a third as much as Wi-Fi to implement
· Bluetooth technology uses a fifth of the power of Wi-Fi
· The Wi-Fi Alliance tests and certifies 802.11 based wireless equipment
· 802.11a: This uses OFDM, operates in the 5 GHz range, and has a maximum data rate of 54 Mbps
· 802.11b: Operates in the 2.4 GHz range, has a maximum data rate of 11 Mbps and uses DSSS. 802.11b is the original Wi-Fi standard
· 802.11g: Operates in the 2.4 GHz range, uses OFDM and has a maximum data rate of 54 Mbps. This is backwards compatible with 802.11b
· 802.11e: This standard will improve quality of service
· 802.11h: This standard is a supplement to 802.11a in Europe and will provide spectrum and power control management. Under this standard, dynamic frequency selection (DFS) and transmit power control (TPC) are added to the 802.11a specification
· 802.11i: This standard is for enhanced security. It includes the advanced encryption standard (AES). This standard is not completely backwards compatible and some users will have to upgrade their hardware. The full 802.11i support is also referred to as WPA2
· 802.11k: Under development, this amendment to the standard should allow for increased radio resource management on 802.11 networks
· 802.11n: This standard is expected to operate in the 5 GHz range and offer a maximum data rate of over 100 Mbps (though some proposals are seeking upwards of 500 Mbps). 802.11n will handle wireless multimedia applications better than the other 802.11 standards
· 802.11p: This standard will operate in the automotive-allocated 5.9 GHz spectrum. It will be the basis for the dedicated short range communications (DSRC) in North America. The DSRC will allow vehicle to vehicle and vehicle to roadside infrastructure communication
· 802.11r: This amendment to the standard will improve users’ ability to roam between access points or base stations. The task group developing this formed in spring/summer 2004
· 802.11s: Under development, this amendment to the standard will allow for mesh networking on 802.11 networks. The task group developing this formed in spring/summer 2004
WiMAX (Worldwide Interoperability for Microwave Access and IEEE 802.16)
· WiMax is a wireless metropolitan area network (MAN) technology
· WiMax has a range of up to 50 km with data rates of up to 70 Mbps; a typical cell has a shorter range
· The original 802.16 standard operated in the 10-66 GHz frequency bands with line of sight environments
· The newly completed 802.16a standard operates between 2 and 11 GHz and does not need line of sight
· Delays in regulatory approval in Europe are due to issues regarding the use of spectrum in the 2.8 GHz and 3.4 GHz ranges
· Supports vehicle mobility at speeds between 20 and 100+ km/h. The 802.16e standard will allow nomadic portability
· The IEEE 802.16a and the ETSI HIPERMAN (High Performance Radio Metropolitan Area Network) share the same PHY and MAC. 802.16 has been designed from the beginning to be compatible with the European standard
· Created to compete with DSL and cable modem access, the technology is considered ideal for rural, hard to wire areas
WiBro (Wireless Broadband)
· WiBro (Portable Internet Service) provides high data rate wireless Internet access to a Personal Subscriber Station (PSS) in stationary or mobile environments, anytime and anywhere. It is primarily deployed in South Korea and is based on TTA specifications.
· 2300-2400 MHz, TDD, OFDMA, channel bandwidth 10 MHz, etc.
· The system supports mobile users at speeds of up to 60 km/h
· Per-user throughput: maximum DL/UL = 3/1 Mbps, minimum DL/UL = 512/128 kbps
· Will come online Q1 2006
Infrared (IrDA)
· IrDA is used to provide wireless connectivity for devices that would normally use cables to connect. IrDA is a point-to-point, narrow angle (30° cone), ad-hoc data transmission standard designed to operate over a distance of 0 to 1 meter and at speeds of 9600 bps to 16 Mbps
· IrDA is not able to penetrate solid objects and has limited data exchange applications compared to other wireless technologies
· IrDA is mainly used in payment systems, in remote control scenarios or when synchronizing two PDAs with each other
Radio Frequency Identification (RFID)
· There are over 140 different ISO standards for RFID for a broad range of applications
· With RFID, a passive or unpowered tag can be powered at a distance by a reader device. The receiver, which must be within a few feet, pulls information off the ‘tag,’ and then looks up more information from a database. Alternatively, some tags are self-powered, ‘active’ tags that can be read from a greater distance
· RFID can operate in low frequency (less than 100 MHz), high frequency (more than 100 MHz), and UHF (868 to 954 MHz)
· Uses include tracking inventory both in shipment and on retail shelves
802.20
· Considered to be mobile broadband wireless access.
· Maximum data rate expected to be 1 Mbps, operating in licensed bands below 3.5 GHz
· Supports vehicle mobility up to 250 km/hr
Security
Today's wireless world means that data is being sent invisibly from device to device, country to country, person to person. This data, in the form of e-mails, photos, contacts and addresses, is precious and private to each of us. This private information, no longer making its way along wires in plain sight, needs to be sent securely to its intended recipient without interception. Wireless standards the world over are evolving and have various mechanisms for dealing with the security of their users, and Bluetooth wireless technology is no exception. Bluetooth wireless technology has, from its inception, put great emphasis on wireless security so that users of this global standard can feel secure while making their connections. The Bluetooth Special Interest Group (SIG), made up of over 4000 member manufacturers, has a Bluetooth security experts group made up of engineers from its member companies who provide critical security information and feedback that is taken into account as the Bluetooth wireless specification evolves.
Product developers that use Bluetooth wireless technology in their products have several options for implementing security. There are three modes of security for Bluetooth access between two devices:
· Security Mode 1: non-secure
· Security Mode 2: service level enforced security
· Security Mode 3: link level enforced security
The manufacturer of each product determines these security modes. Devices and services also have different security levels. For devices, there are two levels: "trusted device" and "untrusted device." A trusted device, having been paired with one's other device, has unrestricted access to all services. For services, three security levels are defined: services that require authorization and authentication, services that require authentication only, and services that are open to all devices.
Lately, confusion and misinformation surrounding security and Bluetooth wireless technology have increased. The current security issues typically involve mobile phones; how these issues apply to other classes of devices is important but often not addressed. The encryption algorithm in the Bluetooth specifications is secure, and this applies to devices such as mice and keyboards connecting to a PC, a mobile phone synchronizing with a PC, and a PDA using a mobile phone as a modem, to name just a few of the many use cases. Cases where data has been compromised on mobile phones are the result of implementation issues on that platform. The Bluetooth SIG works diligently with its members to investigate any issues that are reported and to understand the root cause. If it is a specification issue, the SIG works with the membership to get patches out and ensure future devices do not suffer from the same vulnerability. This is an ongoing process. The recently reported issues of advanced "hackers" gaining access to information stored on select mobile phones using Bluetooth functionality are due to incorrect implementation. The names bluesnarfing and bluebugging have been given to these methods of illegal and improper access to information. The questions and answers below provide users with more information about these issues and address concerns about dealing with these security risks.
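As a hedged sketch (this is not an API from the Bluetooth specification), the fragment below models the three security modes, the two device trust levels and the three service security levels described above, together with a simplified access decision. The decision function is an illustration, not the specification's exact rules.

from enum import Enum

class SecurityMode(Enum):           # per device, chosen by the manufacturer
    NON_SECURE = 1                  # Mode 1: no security enforced
    SERVICE_LEVEL = 2               # Mode 2: security enforced per service
    LINK_LEVEL = 3                  # Mode 3: security enforced on the link itself

class DeviceTrust(Enum):
    TRUSTED = "trusted"             # previously paired; unrestricted access to services
    UNTRUSTED = "untrusted"

class ServiceLevel(Enum):
    OPEN = "open to all devices"
    AUTHENTICATION = "authentication only"
    AUTH_AND_AUTHZ = "authorization and authentication"

def may_use_service(trust: DeviceTrust, level: ServiceLevel,
                    authenticated: bool, authorized: bool) -> bool:
    """Simplified access decision mirroring the trust/service levels above."""
    if trust is DeviceTrust.TRUSTED:
        return True                 # trusted devices have unrestricted access
    if level is ServiceLevel.OPEN:
        return True
    if level is ServiceLevel.AUTHENTICATION:
        return authenticated
    return authenticated and authorized

# An untrusted device may still use an authentication-only service once authenticated.
print(may_use_service(DeviceTrust.UNTRUSTED, ServiceLevel.AUTHENTICATION,
                      authenticated=True, authorized=False))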
List of applications
· Wireless control of and communication between a cell phone and a hands free headset or car kit. This was one of the earliest applications to become popular.
· Wireless networking between PCs in a confined space and where little bandwidth is required.
· Wireless communications with PC input and output devices, the most common being the mouse, keyboard and printer.
· Transfer of files between devices via OBEX.
· Transfer of contact details, calendar appointments, and reminders between devices via OBEX.
· Replacement of traditional wired serial communications in test equipment, GPS receivers and medical equipment.
· For controls where infrared was traditionally used.
· Sending small advertisements from Bluetooth enabled advertising hoardings to other, discoverable, Bluetooth devices.
· Wireless control of a games console – Nintendo's Wii and Sony's PlayStation 3 both use Bluetooth technology for their wireless controllers.
Hackers of Bluetooth
Bluejacking
Bluejacking allows phone users to send business cards anonymously using Bluetooth wireless technology. Bluejacking does NOT involve the removal or alteration of any data from the device. These business cards often have a clever or flirtatious message rather than the typical name and phone number. Bluejackers often look for the receiving phone to ping or the user to react. They then send another, more personal message to that device. Once again, in order to carry out a bluejacking, the sending and receiving devices must be within 10 meters of one another. Phone owners who receive bluejack messages should refuse to add the contacts to their address book. Devices that are set in non-discoverable mode are not susceptible to bluejacking.
Bluebugging
Bluebugging allows skilled individuals to access the mobile phone commands using Bluetooth wireless technology without notifying or alerting the phone’s user. This vulnerability allows the hacker to initiate phone calls, send and receive text messages, read and write phonebook contacts, eavesdrop on phone conversations, and connect to the Internet. As with all the attacks, without specialized equipment, the hacker must be within a 10 meter range of the phone. This is a separate vulnerability from bluesnarfing and does not affect all of the same phones as bluesnarfing.
Bluesnarfing
Bluesnarfing allows hackers to gain access to data stored on a Bluetooth enabled phone using Bluetooth wireless technology without alerting the phone’s user of the connection made to the device. The information that can be accessed in this manner includes the phonebook and associated images, calendar, and IMEI (international mobile equipment identity). By setting the device to non-discoverable mode, it becomes significantly more difficult to find and attack. Without specialized equipment the hacker must be within a 10 meter range of the device while running a device with specialized software. Only specific older Bluetooth enabled phones are susceptible to bluesnarfing.
Car Whisperer
The car whisperer is a software tool developed by security researchers to connect to and send or receive audio to and from Bluetooth car-kits with a specific implementation. An individual using the tool could potentially remotely connect to and communicate with a car from an unauthorized remote device, sending audio to the speakers and receiving audio from the microphone in the remote device. Without specialized equipment, someone using the tool must be within a 10 meter range of the targeted car while running a laptop with the car whisperer tool. The security researchers’ goal was to highlight an implementation weakness in a select number of Bluetooth enabled car-kits and pressure manufacturers to better secure Bluetooth enabled devices.
Is Bluetooth wireless technology susceptible to hackers in other ways?
Currently, the attacks listed here are the only known possibilities for hacking into a limited number of products on the market, provided appropriate measures are taken such as having security turned on, using reasonably long PIN codes and pairing devices in private. The Bluetooth SIG continues to study security risks associated with the technology and determine their viability as the technology spreads and develops.
What can consumers do to protect their data?
Consumers can do a number of things to protect their data. If users have a phone that is vulnerable to bluesnarfing or bluebugging, they should contact the phone's manufacturer or take the phone to a manufacturer-authorized service point. The manufacturers of the vulnerable devices have developed software patches to fix the vulnerability. In addition, if users are still concerned about a device being targeted, they can set the device to non-discoverable mode when not using Bluetooth wireless technology and in unknown areas. Users can also ensure their data is secure by not pairing with unknown devices. If a user receives an invitation to pair with another device and is asked to enter a PIN code, but is unsure which device is requesting the pairing, the user should not pair. Only pair with known devices.
What is the cabir worm? Which devices does the cabir worm affect?
The cabir worm is malicious software, also known as malware. When installed on a phone, it uses Bluetooth technology to send itself to other similarly vulnerable devices. Due to this self-replicating behavior, it is classified as a worm. The cabir worm currently only affects mobile phones that use the Symbian series 60 user interface platform and feature Bluetooth wireless technology. Furthermore, the user has to manually accept the worm and install the malware in order to infect the phone. More information on the cabir worm is available from the software licensing company Symbian and on the websites of F-Secure, McAfee and Symantec.
How does a PIN affect security?
The personal identification number (PIN) is a four or more digit alphanumeric code that is temporarily associated with one's products for the purposes of a one-time secure pairing. It is recommended that users employ at minimum an eight-character alphanumeric PIN when possible. Product owners must share the PIN only with trusted individuals and trusted products for pairing. Without this PIN, pairing cannot occur. It is always advisable to pair products in areas with relative privacy; avoid pairing your Bluetooth enabled devices in public. If, for some reason, your devices become unpaired, wait until you are in a secure, private location before re-pairing your devices.
Why does pairing in a public location potentially introduce a security risk?
Theoretically, a hacker can monitor and record activities in the frequency spectrum and then use a computer to regenerate the PIN codes being exchanged. This requires specially built hardware and thorough knowledge of Bluetooth systems. By using a PIN code with eight or more alphanumeric characters, it would take the hacker years to discover the PIN. By using a four-digit numeric PIN code, the hacker could discover the PIN in a matter of a few hours. Still, advanced software is required.
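The difference between a four-digit numeric PIN and an eight-character alphanumeric PIN comes down to keyspace size. The sketch below does that arithmetic; the guessing rate is a hypothetical figure chosen only to mirror the "few hours" versus "years" contrast described above.

# Keyspace comparison for Bluetooth PIN codes (illustrative arithmetic only).
DIGITS = 10                     # 0-9
ALPHANUMERIC = 10 + 26 + 26     # 0-9, a-z, A-Z = 62 symbols

four_digit_space = DIGITS ** 4          # 10,000 possibilities
eight_char_space = ALPHANUMERIC ** 8    # roughly 2.2e14 possibilities

# Hypothetical offline guessing rate for an attacker replaying recorded pairing traffic.
GUESSES_PER_SECOND = 1.0

def time_to_exhaust(keyspace: int, rate: float = GUESSES_PER_SECOND) -> str:
    """Describe how long exhausting the whole keyspace would take at the given rate."""
    hours = keyspace / rate / 3600
    if hours < 24 * 365:
        return f"about {hours:,.1f} hours"
    return f"about {hours / (24 * 365):,.0f} years"

print("4-digit numeric PIN:     ", time_to_exhaust(four_digit_space))  # a few hours
print("8-char alphanumeric PIN: ", time_to_exhaust(eight_char_space))  # millions of years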
What is denial of service (DoS)?
The well-known denial of service (DoS) attack, which has been most popular for attacking internet web sites and networks, is now an option for hackers of Bluetooth wireless technology enabled devices. This nuisance is neither original nor ingenious and is, very simply, a constant request for response from a hacker’s Bluetooth enabled computer (with specific software) to another Bluetooth enabled device, such that it causes some temporary battery degradation in the receiving device. While occupying the Bluetooth link with invalid communication requests, the hacker can temporarily disable the product’s Bluetooth services.
What devices are vulnerable to attacks, and what is the Bluetooth SIG doing about it?
DoS attacks can be performed on any discoverable Bluetooth enabled device, but in some cases advanced hackers can determine the address of a non-discoverable Bluetooth device. The Bluetooth SIG takes all security issues seriously and constantly works to make the specification more secure. Therefore, future Bluetooth core specifications are planned to include features that will make it impossible to penetrate non-discoverable devices. There are also ways for manufacturers to reduce the risk of DoS attacks at the implementation level of Bluetooth wireless technology.
What is the risk of being on the receiving end of a DoS attack?
To date, DoS attacks on Bluetooth devices have only been conducted in laboratory tests. The risk of an attempted DoS attack should be considered minimal given the requirements and the normally short range of Bluetooth wireless technology.
Protecting your Bluetooth Devices
While Bluetooth wireless technology is fundamentally secure, keeping Bluetooth enabled devices secure is a team effort. The Bluetooth SIG, manufacturers, and you, the user, each have a role in ensuring the security of your Bluetooth enabled devices. As a user of Bluetooth enabled devices, you should understand the basics of securely using a PDA, phone, or any other device that can connect wirelessly to other consumer electronics.
Future Trend
The next version of Bluetooth technology, currently code-named Lisbon, includes a number of features to increase the security, usability and value of Bluetooth. The following features are defined:
· Automatic Encryption Change - allows encrypted links to change their encryption keys periodically, increasing security, and also allowing role switches on an encrypted link
· Extended Inquiry Response - provides more information during the inquiry procedure to allow better filtering of devices before connection. This information includes the name of the device, and a list of services, with other information.
· Sniff Subrating - reduces power consumption when devices are in the sniff low-power mode, especially on links with asymmetric data flows. Human interface devices (HID) are expected to benefit the most, with mice and keyboards increasing battery life by a factor of 3 to 10 compared to current devices.
· QoS Improvements - these will enable audio and video data to be transmitted at a higher quality, especially when best effort traffic is being transmitted in the same piconet.
· Simple Pairing - this improvement will radically improve the pairing experience for Bluetooth devices, while at the same time increasing the use and strength of security. It is expected that this feature will significantly increase the use of Bluetooth.
· Bluetooth technology already plays a part in the rising Voice over IP (VOIP) scene, with Bluetooth headsets being used as wireless extensions to the PC audio system. As VOIP becomes more popular, and more suitable for general home or office users than wired phone lines, Bluetooth may be used in Cordless handsets, with a base station connected to the Internet link.
· The version of Bluetooth after Lisbon, code-named Seattle, has many of the same features, but is most notable for plans to adopt Ultra-wideband radio technology. This will allow Bluetooth use over UWB radio, enabling very fast data transfers, synchronizations and file pushes, while building on the very low power idle modes of Bluetooth. The combination of a radio using little power when no data is transmitted, and a high data rate radio used to transmit bulk data, could be the start of software radios. Bluetooth, given its worldwide regulatory approval, low power operation, and robust data transmission capabilities, provides an excellent signalling channel to enable the soft radio concept.