

Computer/IT Topics Category

TigerSharc Processor

Added on: February 25th, 2020 by Afsal Meerankutty No Comments

In the past three years, several multiple-datapath and pipelined digital signal processors have been introduced into the marketplace. This new generation of DSPs takes advantage of higher levels of integration than were available to their predecessors. The TigerSHARC processor is the newest and most powerful member of this family, incorporating mechanisms such as SIMD, VLIW and short-vector memory access in a single processor. This is the first time all of these techniques have been combined in a real-time processor.

The TigerSHARC DSP is an ultra-high-performance static superscalar architecture optimized for telecommunications infrastructure and other computationally demanding applications. This unique architecture combines elements of RISC, VLIW and standard DSP processors to provide native support for 8-, 16- and 32-bit fixed-point as well as floating-point data types on a single chip.

Large on-chip memory, extremely high internal and external bandwidths, and dual compute blocks provide the capabilities needed to handle a vast array of computationally demanding, large signal processing tasks.

Plagiarism Detection Of Images

Added on: February 23rd, 2020 by Afsal Meerankutty No Comments

“Plagiarism is defined as presenting someone else’s work as your own. Work means any intellectual output, and typically includes text, data, images, sound or performance.” Plagiarism is the unacknowledged and inappropriate use of the ideas or wording of another writer. Because plagiarism corrupts the values to which the university community is fundamentally committed (the pursuit of knowledge and intellectual honesty), it is considered a grave violation of academic integrity, and the sanctions against it are correspondingly severe. Plagiarism can be characterized as “academic theft.”

CBIR, or Content Based Image Retrieval, is the retrieval of images based on visual features such as colour, texture and shape. One reason for the development of CBIR systems is that in many large image databases, traditional methods of image indexing have proven insufficient, laborious and extremely time consuming. These old methods of image indexing, ranging from storing an image in the database and associating it with a keyword or number to associating it with a categorized description, have become obsolete. In CBIR, each image stored in the database has its features extracted and compared to the features of the query.

Feature (content) extraction is the basis of Content Based Image Retrieval. In a broad sense, features may include both text-based features (keywords, annotations, etc.) and visual features (colour, texture, shape, faces, etc.). Within the visual feature scope, features can be further classified as general features and domain-specific features. The former include colour, texture and shape, while the latter are application dependent and may include, for example, human faces and fingerprints.
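
As a minimal sketch of the extract-and-compare step described above, the snippet below builds a quantized colour histogram for each image and ranks matches by histogram distance. This is an illustrative toy (real CBIR systems use far richer descriptors); the bin count and distance metric are assumptions, not from the original paper.

```python
import math

def color_histogram(pixels, bins=4):
    """Quantize each (r, g, b) pixel into bins^3 buckets and count occupancy."""
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        idx = ((r * bins // 256) * bins * bins
               + (g * bins // 256) * bins
               + (b * bins // 256))
        hist[idx] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]  # normalize so images of any size compare

def histogram_distance(h1, h2):
    """Euclidean distance between two normalized histograms (0 = identical)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

# Three tiny stand-in "images": two red-ish, one blue
red  = [(250, 10, 10), (240, 20, 15)]
red2 = [(245, 12, 8), (238, 25, 20)]
blue = [(10, 10, 250), (15, 20, 240)]

h_red, h_red2, h_blue = map(color_histogram, (red, red2, blue))
```

A query image would be ranked against the database by sorting on `histogram_distance`; here the two red images land closer to each other than to the blue one.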


Secure Internet Live Chat Protocol (SILC)

Added on: February 17th, 2020 by Afsal Meerankutty 2 Comments

The Secure Internet Live Conferencing (SILC) protocol is a new-generation chat protocol which provides full-featured conferencing services, just like any other contemporary chat protocol. In addition, it provides security by encrypting and authenticating the messages in the network. Security has been the primary goal of the SILC protocol, and the protocol has been designed from day one with security in mind. All packets and messages travelling in the SILC network are always encrypted and authenticated. The network topology also differs from, for example, the IRC network; the SILC network topology aims to be more powerful and scalable than IRC's. The basic purpose of the SILC protocol is to provide secure conferencing services. The SILC protocol has been developed as an Open Source project. The protocol specifications are freely available and have been submitted to the IETF, and the very first implementations of the protocol are already available.

SILC provides security services that no other conferencing protocol offers today. The most popular conferencing service, IRC, is entirely insecure. If you need a secure place to talk to a person or a group of people over the Internet, IRC, or any other conferencing service for that matter, cannot be used. Anyone can see the messages and their contents in the IRC network; worse still, someone may be able to change the contents of the messages. Also, all authentication data, such as passwords, is sent in plaintext in IRC.

SILC is about much more than just `encrypting the traffic'. That is easy enough to do with IRC and SSL hybrids, but even then only part of the network can be secured, not all of it. SILC provides security services such as entirely secure private messages: no one can see a message except you and its real receiver. SILC provides the same guarantee for channels: no one except the clients joined to a channel may see the messages destined for it. Communication between client and server is also secured with session keys, and all commands, authentication data (such as passwords) and other traffic are entirely secured. The entire network, and every part of it, is secured. We are not aware of any other conferencing protocol providing the same features at the present time.
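
To make the "encrypted and authenticated with session keys" idea concrete, here is a generic encrypt-then-MAC sketch. This is emphatically not SILC's actual wire format or cipher suite (SILC negotiates its keys via its own key exchange); the toy SHA-256 counter keystream is for illustration only and is not a secure cipher.

```python
import hmac, hashlib

def keystream(key, nonce, length):
    """Toy CTR-style keystream from SHA-256 (illustration only, not SILC's cipher)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, nonce, plaintext):
    """Encrypt-then-MAC: nobody without both session keys can read or alter it."""
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag          # nonce is 8 bytes in this sketch

def open_(enc_key, mac_key, packet):
    """Verify the MAC first; only then decrypt. Tampering raises an error."""
    nonce, ct, tag = packet[:8], packet[8:-32], packet[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message tampered with or wrong key")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))

pkt = seal(b"enc-session-key", b"mac-session-key", b"nonce--1", b"hello channel")
```

Only a holder of both session keys can recover `b"hello channel"`; flipping any bit of the packet makes verification fail, which is the property that distinguishes SILC-style messaging from plaintext IRC.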

Remote Frame Buffer (RFB) Protocol

Added on: January 30th, 2020 by Afsal Meerankutty No Comments

Remote desktop software provides remote access: the ability of a user to log onto a remote computer or network from a distant location. This usually comprises computers, a network, and some remote access software to connect to the network. Since this involves clients and servers connected across a network, a protocol is essential for efficient communication between them. The RFB protocol is one such protocol, used by clients and servers to communicate with each other and thereby make remote access possible. The purpose of this paper is to give a general idea of how the protocol actually works. It also gives a broad idea of the various messages of the protocol and how these messages are sent and interpreted by the client and server modules. The paper also includes a simple implementation of the protocol which shows the various messages and methods and how the protocol is practically used for gaining remote access.

RFB (remote framebuffer) is a simple and efficient protocol which provides remote access to graphical user interfaces. As its name suggests, it works at the framebuffer level and is thus applicable to all windowing systems and applications, e.g. X11, Windows and Macintosh. It should also be noted that other protocols are available; RFB is the protocol used in Virtual Network Computing (VNC) and its various forms. With the growing number of software products and services, such protocols play a very important role nowadays.
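
As a taste of what those client-to-server messages look like on the wire, here is the FramebufferUpdateRequest message as specified in RFC 6143 (section 7.5.3): a one-byte message type (3), an incremental flag, and four big-endian 16-bit fields describing the screen region the client wants refreshed.

```python
import struct

def framebuffer_update_request(incremental, x, y, width, height):
    """RFB FramebufferUpdateRequest (RFC 6143, sec. 7.5.3), all fields big-endian."""
    return struct.pack(">BBHHHH",
                       3,                          # message-type: 3
                       1 if incremental else 0,    # incremental update flag
                       x, y, width, height)        # requested rectangle

# Ask the server for incremental updates to a full 1024x768 framebuffer
msg = framebuffer_update_request(True, 0, 0, 1024, 768)
```

A VNC client sends this in a loop; the server answers with FramebufferUpdate messages carrying the changed rectangles, which is how the remote desktop stays in sync.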

Evolution of SIM to eSIM

Added on: October 31st, 2018 by Afsal Meerankutty No Comments

Every GSM (Global System for Mobile Communications) phone, also called a 2G mobile phone, and every UMTS (Universal Mobile Telecommunications System) phone, aka 3G mobile phone, requires a smart card to connect and function in the mobile network. This smart card is called the SIM, which stands for Subscriber Identity Module. This module contains the International Mobile Subscriber Identity (IMSI) and the credentials necessary for the identification and authentication of the subscriber. Without the SIM the user is not allowed to connect to the mobile network and hence cannot make or receive phone calls.
As a smart card, the SIM is a tamper-resistant microprocessor card with its own operating system, storage and built-in security features that prevent unauthorized individuals from accessing, retrieving, copying or modifying the subscriber's IMSI and credentials. Abuse of the subscriber's account and fraudulent access to the mobile network can hence be avoided. Furthermore, as a removable and autonomous module, the SIM introduces great flexibility, since the user can easily move it to another mobile phone or replace it with another one. So far, the smart card and its content have been bound together and are jointly called the SIM.
With advances in wireless and storage technology, new demands have arisen. Because of the cumbersome task of opening machines and installing a removable SIM, M2M applications are designed with a pre-installed SIM application. M2M applications based on cellular networks with the ability to install the user subscription have advantages and disadvantages for the different stakeholders.

This master's thesis presents multiple alternative solutions to this installation problem and also describes the SIM's evolutions, i.e. the eUICC and the soft SIM, to give a comprehensive view of the SIM's situation. The thesis also presents a security assessment of these evolutions, which differ from the current removable SIM.

Security in Apple iOS

Added on: March 7th, 2018 by Afsal Meerankutty No Comments

Apple iOS has been a very advanced and sophisticated mobile operating system ever since it was first released in 2007. In this seminar paper, we first introduce iOS security by discussing the implementation details of its essential building blocks: system security, data security, hardware security and app security. We then discuss some potential and existing security issues of iOS.

iOS has been one of the most popular mobile operating systems in the world ever since it was first released in 2007. As of June 2014, the Apple App Store contained more than 1.2 million iOS applications, which have collectively been downloaded more than 60 billion times. iOS was designed and created by Apple Inc. and is distributed exclusively for Apple hardware. iOS protects not only the data stored in the iOS device, but also the data transmitted on networks when using internet services. iOS provides advanced and sophisticated security for iOS devices while remaining very easy to use. Users do not need to spend much time on security configuration, as most security features are automatically configured by iOS. iOS also supports biometric authentication (Touch ID), recently incorporated into iOS devices, with which users can use their fingerprints to perform private and sensitive tasks such as unlocking the iPhone and making payments. This survey discusses the details of how these security elements are implemented in iOS and some issues with iOS security.

Medical Imaging

Added on: January 10th, 2017 by Afsal Meerankutty No Comments

The increasing capabilities of medical imaging devices have strongly facilitated diagnosis and surgery planning. During the last decades, the technology has evolved enormously, resulting in a never-ending flow of high-dimensional and high-resolution data that needs to be visualized, analyzed and interpreted. The development of computer hardware and software has provided invaluable tools for performing these tasks, but it is still very hard to exclude the human operator from the decision making. The process of stating a medical diagnosis or conducting surgical planning is simply too complex to fully automate. Therefore, interactive or semi-automatic methods for image analysis and visualization are needed, where the user can explore the data efficiently and provide his or her expert knowledge as input to the methods.

All software currently being written for medical imaging systems has to conform to the DICOM (Digital Imaging and Communications in Medicine) standard to ensure that systems from different vendors can successfully share information. So you can, for example, acquire an image on a Siemens station and do the processing on a Philips multimodal station; stations able to process, say, MRI as well as CAT scan images are already in common use. Vendors are also able to send private information that only their own software and viewing stations can read, so as to enhance their equipment. For example, a Philips acquisition system can acquire and transmit more information than prescribed by the standard; such extra information can be deciphered only by the vendor's own software. Even though the basic job is image processing, the algorithms used in medical software can be vastly different from those used in other commercial image manipulation software such as movie software or Photoshop. The reason is that medical systems have to preserve a very high degree of accuracy and detail, or there could be fatal results.

Smart Quill

Added on: January 3rd, 2017 by Afsal Meerankutty No Comments

Lyndsay Williams of Microsoft Research's Cambridge UK lab is the inventor of the SmartQuill, a pen that can remember the words that it is used to write and then transform them into computer text. The idea that "it would be neat to put all of a handheld-PDA type computer in a pen" came to the inventor in her sleep. "It's the pen for the new millennium," she says. Encouraged by Nigel Ballard, a leading consultant to the mobile computer industry, Williams took her prototype to the British Telecommunications Research Lab, where she was promptly hired and given money and institutional support for her project. The prototype, called SmartQuill, has been developed by world-leading research laboratories run by BT (formerly British Telecom) at Martlesham, eastern England. It is claimed to be the biggest revolution in handwriting since the invention of the pen.

With the introduction of handheld computers, the trend has shifted toward small computers for everyday computation. This has pushed computer manufacturers toward almost gadget-like computers. Reducing the size of handheld computers can only be taken so far before they become unusable: keyboards become so tiny that you would need needle-like fingers to operate them, and screens need constant cursor control just to read simple text.

The introduction of SmartQuill has solved some of these problems. Lyndsay Williams of Microsoft, UK is the inventor of SmartQuill, a pen that can remember the words that it is used to write and then transform them into computer text. The pen is slightly larger than an ordinary fountain pen, with a screen on the barrel. Users can enter information into its applications by pushing a button, writing in their own handwriting on any surface: paper, screen, tablet or even air. There is also a small three-line screen for reading the information stored in the pen; users can scroll the screen by tilting the pen. When the pen is plugged into an electronic docking station, text data is transmitted to a desktop computer, printer, modem or mobile telephone to send files electronically.

Graphical Password Authentication

Added on: December 29th, 2016 by Afsal Meerankutty No Comments

The most common computer authentication method is to use alphanumerical usernames and passwords. This method has been shown to have significant drawbacks. For example, users tend to pick passwords that can be easily guessed. On the other hand, if a password is hard to guess, then it is often hard to remember.

To address this problem, some researchers have developed authentication methods that use pictures as passwords. In this paper, we conduct a comprehensive survey of the existing graphical password techniques. We classify these techniques into two categories: recognition-based and recall-based approaches. We discuss the strengths and limitations of each method and point out the future research directions in this area.

We also try to answer two important questions: "Are graphical passwords as secure as text-based passwords?" and "What are the major design and implementation issues for graphical passwords?" In this paper, we conduct a comprehensive survey of existing graphical password authentication techniques, and we also propose a new technique for graphical authentication.
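
To illustrate how a recall-based scheme can work in practice, here is a sketch in the style of click-point schemes such as PassPoints: clicks are snapped to a tolerance grid so that nearby clicks verify, and only a salted hash of the cell sequence is stored. The grid size and salt are illustrative assumptions, not details from this paper's proposed technique.

```python
import hashlib

GRID = 10  # discretize the image into a 10x10 grid of tolerance cells

def cell(point, img_w, img_h):
    """Map a raw (x, y) click to a grid cell so nearby clicks hash identically."""
    x, y = point
    return (x * GRID // img_w, y * GRID // img_h)

def enroll(clicks, img_w, img_h, salt=b"demo-salt"):
    """Store only a salted hash of the click-cell sequence, never the points."""
    cells = b"".join(bytes(cell(p, img_w, img_h)) for p in clicks)
    return hashlib.sha256(salt + cells).hexdigest()

def verify(clicks, img_w, img_h, stored):
    return enroll(clicks, img_w, img_h) == stored

stored = enroll([(102, 57), (340, 411)], 640, 480)
```

Clicks a few pixels off, e.g. `(105, 60)` for `(102, 57)`, fall in the same cells and still verify, while a click in the wrong region fails; the coarseness of the grid is exactly the security/usability trade-off such schemes must tune.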

Wearable Computers

Added on: November 3rd, 2013 by Afsal Meerankutty 4 Comments

As computers move from the desktop, to the palmtop, and onto our bodies and into our everyday lives, infinite opportunities arise to realize applications that have never before been possible. To date, personal computers have not lived up to their name. Most machines sit on a desk and interact with their owners only a small fraction of the day. A person's computer should be worn, much as eyeglasses or clothing are worn, and interact with the user based on the context of the situation. With the current accessibility of wireless local area networks, the host of other context sensing and communication tools available, and the current scale of miniaturization, it is becoming clear that the computer should act as an intelligent assistant, whether through a remembrance agent, augmented reality, or intellectual collectives. It is also important that a computer be small: something we could slip into our pocket, or better yet, wear like a piece of clothing. It is rapidly becoming apparent that the next technological leap is to integrate the computer and the user in a non-invasive manner; this leap will bring us into the fascinating world of wearable computers.


DakNet

Added on: November 3rd, 2013 by Afsal Meerankutty 10 Comments

DakNet provides extraordinarily low-cost digital communication, letting remote villages leapfrog past the expense of traditional connectivity solutions and begin development of a full-coverage broadband wireless infrastructure. DakNet, an ad hoc network that uses wireless technology to provide asynchronous digital connectivity, is evidence that the marriage of wireless and asynchronous service may indeed be the beginning of a road to universal broadband connectivity.
This paper briefly explains what DakNet is, how wireless technology is implemented in DakNet, its fundamental operations and applications, cost estimation, advantages and disadvantages, and finally how to connect Indian villages with towns, cities and global markets.

Space Time Adaptive Processing

Added on: October 31st, 2013 by Afsal Meerankutty 3 Comments

Space-time adaptive processing (STAP) is a signal processing technique most commonly used in radar systems. It involves adaptive array processing algorithms to aid in target detection. Radar signal processing benefits from STAP in areas where interference is a problem (i.e. ground clutter, jamming, etc.). Through careful application of STAP, it is possible to achieve order-of-magnitude sensitivity improvements in target detection.
STAP involves a two-dimensional filtering technique using a phased-array antenna with multiple spatial channels. Coupling multiple spatial channels with pulse-Doppler waveforms lends the technique its name, "space-time." Using the statistics of the interference environment, an adaptive STAP weight vector is formed and applied to the coherent samples received by the radar.
In a ground moving target indicator (GMTI) system, an airborne radar collects the returned echo from the moving target on the ground. However, the received signal contains not only the reflected echo from the target, but also the returns from the illuminated ground surface. The return from the ground is generally referred to as clutter.
The clutter return comes from all the areas illuminated by the radar beam, so it occupies all range bins and all directions. The total clutter return is often much stronger than the returned signal echo, which poses a great challenge to target detection. Clutter filtering, therefore, is a critical part of a GMTI system.
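
The weight-vector step above can be sketched numerically. The snippet below forms MVDR-style adaptive weights w = R⁻¹s / (sᴴR⁻¹s) from an estimated interference covariance R and a target steering vector s; it is a toy one-dimensional stand-in (a real STAP filter stacks spatial channels and pulses into a joint angle-Doppler steering vector), and all the numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8  # degrees of freedom (channels x pulses, flattened, in real STAP)

# Steering vector toward the target's angle/Doppler (toy spatial frequency)
s = np.exp(1j * np.pi * np.arange(N) * 0.25)

# Strong clutter arriving from a different direction, plus weak noise
clutter = np.exp(1j * np.pi * np.arange(N) * 0.8)
snapshots = (0.1 * rng.standard_normal((200, N))
             + np.outer(rng.standard_normal(200), clutter))

# Interference-plus-noise covariance estimated from training snapshots,
# with light diagonal loading for numerical stability
R = snapshots.conj().T @ snapshots / 200 + 0.01 * np.eye(N)

# Adaptive weights: minimize interference power subject to unit target gain
w = np.linalg.solve(R, s)
w /= s.conj() @ w
```

The resulting filter keeps unit gain toward the target (`w.conj() @ s == 1`) while placing a deep null on the clutter direction, which is the order-of-magnitude sensitivity gain the text refers to.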


Biosensors

Added on: October 31st, 2013 by Afsal Meerankutty 2 Comments

A biosensor is a device for the detection of an analyte that combines a biological component with a physicochemical detector component. Many optical biosensors based on the phenomenon of surface plasmon resonance are evanescent wave techniques. The most widespread example of a commercial biosensor is the blood glucose biosensor, which uses the enzyme glucose oxidase to break blood glucose down.
Biosensors are the combination of a bioreceptor and a transducer. The bioreceptor is a biomolecule that recognizes the target, whereas the transducer converts the recognition event into a measurable signal. Biosensors are used in the market in many diverse areas, including clinical testing in one of the biggest diagnostic markets, worth some US$4,000 million in the US.
They are very useful for measuring a specific quantity with great accuracy, and the response is fast and can be measured directly. They are also very simple: receptor and transducer are integrated into a single sensor, without using reagents.

Facial Recognition System

Added on: October 29th, 2013 by Afsal Meerankutty 1 Comment

Wouldn’t you love to replace password-based access control, to avoid having to reset forgotten passwords and worry about the integrity of your system? Wouldn’t you like to rest secure in the comfort that your healthcare system does not rely merely on your social security number as proof of your identity for granting access to your medical records?
Because each of these questions is becoming more and more important, access to reliable personal identification is becoming increasingly essential. Conventional methods of identification based on possession of ID cards, or on exclusive knowledge such as a social security number or a password, are not altogether reliable. ID cards can be lost, forged or misplaced; passwords can be forgotten or compromised. But a face is undeniably connected to its owner: it cannot be borrowed, stolen or easily forged. Hence the importance of facial recognition systems.


IPv6: The Next Generation Protocol

Added on: October 23rd, 2013 by Afsal Meerankutty 2 Comments
Internet Protocol version 6 (IPv6) comes with a package of advantages, including a simple header format, a very large address space and extensibility. However, IPv6 packet transmission still uses the traditional infrastructure of protocol stacks such as TCP/IP, so these advantages cannot be exploited optimally. One limitation of TCP/IP is the duplication of error detection code verification and regeneration in the Data Link layer: every router has to verify the CRC code at its incoming port and regenerate the CRC code at its outgoing port before forwarding an IPv6 packet to the next router. With advanced networking technology, this is a time-consuming task. This paper proposes a CRC Extension Header (CEH) to do error detection in the Network layer, replacing the current error detection in the Data Link layer. With CEH, verification of the CRC code is done only at the final destination indicated by the destination address field of the IPv6 header. Experimental results showed that the network latency of IPv6 packet transmission decreased by 68%.
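
The end-to-end idea behind CEH can be sketched in a few lines: attach a CRC once at the source, let intermediate hops forward without any verify/regenerate work, and check only at the destination. This is a conceptual stand-in using CRC-32 as the code, not the paper's exact header layout.

```python
import zlib

def add_ceh(payload: bytes) -> bytes:
    """Source: append a CRC-32 trailer, standing in for the CRC Extension Header."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def forward(packet: bytes) -> bytes:
    """Intermediate router: just forward; no per-hop CRC verify/regenerate."""
    return packet

def verify_at_destination(packet: bytes) -> bytes:
    """Destination: the only place the CRC is checked under the CEH scheme."""
    payload, crc = packet[:-4], int.from_bytes(packet[-4:], "big")
    if zlib.crc32(payload) != crc:
        raise ValueError("corrupted in transit")
    return payload

pkt = verify_at_destination(forward(forward(add_ceh(b"IPv6 payload"))))
```

The latency saving claimed in the paper comes precisely from `forward` doing no CRC work: in the traditional Data Link layer scheme, every hop would run both the verify and the regenerate step.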

Automatic Teller Machine

Added on: October 23rd, 2013 by No Comments

An Automatic Teller Machine (ATM) is a machine permitting a bank's customers to make cash withdrawals and check their account balance at any time, without the need for a human teller. Many ATMs also allow people to deposit cash or cheques and transfer money between their bank accounts.

You’re short on cash, so you walk over to the automated teller machine (ATM), insert your card into the card reader, respond to the prompts on the screen, and within a minute you walk away with your money and a receipt. These machines can now be found at most supermarkets, convenience stores and travel centers. Have you ever wondered about the process that makes your bank funds available to you at an ATM on the other side of the country?

E-Paper Technology

Added on: October 10th, 2013 by 1 Comment

E-paper is a revolutionary material that can be used to make next-generation electronic displays. It is a portable, reusable storage and display medium that looks like paper but can be repeatedly written on thousands of times. These displays mark the beginning of a new era for battery-powered information appliances such as cell phones, pagers, watches and hand-held computers.

Two companies are carrying out pioneering work in the development of electronic ink, and both have developed ingenious methods to produce it. One is E Ink, a company based in Cambridge, USA; the other is Xerox, doing research at Xerox's Palo Alto Research Center. Both technologies, being developed commercially for electronically configurable paper-like displays, rely on microscopic beads that change color in response to the charges on nearby electrodes.

Like traditional paper, e-paper must be lightweight, flexible, glare-free and low cost. Research suggests that in just a few years this technology could replace paper in many situations, leading us into a truly paperless world.

Haptic Systems

Added on: October 10th, 2013 by 5 Comments

‘Haptics’ is a technology that adds the sense of touch to virtual environments. Users are given the illusion that they are touching or manipulating a real physical object.
This seminar discusses the important concepts in haptics and some of the most commonly used haptic systems, such as the ‘Phantom’, ‘CyberGlove’ and ‘Novint Falcon’. Following this, a description of how sensors and actuators are used to track the position and movement of haptic devices is provided.
The different types of force rendering algorithms are discussed next, along with the blocks involved in force rendering. Finally, a few applications of haptic systems are taken up for discussion.

IP Spoofing

Added on: October 9th, 2013 by 1 Comment

IP spoofing is a method of attacking a network in order to gain unauthorized access. The attack is based on the fact that Internet communication between distant computers is routinely handled by routers which find the best route by examining the destination address, but generally ignore the origination address. The origination address is only used by the destination machine when it responds back to the source.

In a spoofing attack, the intruder sends messages to a computer indicating that the message has come from a trusted system. To be successful, the intruder must first determine the IP address of a trusted system, and then modify the packet headers so that it appears that the packets are coming from that trusted system.
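
The reason this modification is even possible is that the source address is just a field the sender fills in; nothing in the header binds it to the actual origin, and the header checksum validates regardless. The sketch below builds a minimal IPv4 header (per RFC 791) with an arbitrary source address to illustrate this. It only constructs bytes; actually transmitting such a packet would require raw sockets and privileges, and is outside the scope of this illustration.

```python
import socket
import struct

def ipv4_checksum(header: bytes) -> int:
    """Ones'-complement sum over 16-bit words, as specified in RFC 791."""
    total = sum(struct.unpack(f">{len(header) // 2}H", header))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    """Minimal 20-byte IPv4 header; the source field is whatever the sender says."""
    hdr = struct.pack(">BBHHHBBH4s4s",
                      0x45, 0, 20 + payload_len,   # version/IHL, TOS, total length
                      0, 0,                         # identification, flags/fragment
                      64, 6, 0,                     # TTL, protocol (TCP), checksum=0
                      socket.inet_aton(src), socket.inet_aton(dst))
    csum = ipv4_checksum(hdr)
    return hdr[:10] + struct.pack(">H", csum) + hdr[12:]

# The "trusted" source address is entirely attacker-chosen, yet the header
# checksums correctly, so routers forward it without complaint.
hdr = ipv4_header("10.0.0.5", "192.168.1.1", 0)
```

A useful property to note: recomputing the checksum over a correctly checksummed header yields zero, which is exactly the test routers apply, and it says nothing about whether the source address is genuine.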

Finger Scan Technology

Added on: October 1st, 2013 by No Comments

Reliable user authentication is becoming an increasingly important task in the Web-enabled world. The consequences of an insecure authentication system in a corporate or enterprise environment may include loss of confidential information, denial of service, and compromised data integrity. The prevailing techniques of user authentication, which involve the use of either passwords and user IDs (identifiers), or identification cards and PINs (personal identification numbers), suffer from several limitations. Once an intruder acquires the user ID and the password, the intruder has total access to the user’s resources.
Fortunately, automated biometrics in general, and fingerprint technology in particular, can provide a much more accurate and reliable user authentication method. Biometrics is a rapidly advancing field concerned with identifying a person based on his or her physiological or behavioral characteristics. Examples of automated biometrics include fingerprint, face, iris and speech recognition. Because a biometric property is an intrinsic property of an individual, it is difficult to surreptitiously duplicate and nearly impossible to share. The greatest strength of biometrics, the fact that a biometric does not change over time, is at the same time its greatest liability: once a set of biometric data has been compromised, it is compromised forever.

Adding Intelligence to Internet Using Satellites

Added on: September 30th, 2013 by 2 Comments

Two scaling problems face the Internet today. First, it will be years before terrestrial networks are able to provide adequate bandwidth uniformly around the world, given the explosive growth in Internet bandwidth demand and the amount of the world that is still unwired. Second, the traffic distribution is not uniform worldwide: Clients in all countries of the world access content that today is chiefly produced in a few regions of the world (e.g., North America). A new generation of Internet access built around geosynchronous satellites can provide immediate relief. The satellite system can improve service to bandwidth-starved regions of the globe where terrestrial networks are insufficient and supplement terrestrial networks elsewhere. This new generation of satellite system manages a set of satellite links using intelligent controls at the link endpoints. The intelligence uses feedback obtained from monitoring end-user behavior to adapt the use of resources. Mechanisms controlled include caching, dynamic construction of push channels, use of multicast, and scheduling of satellite bandwidth. This paper discusses the key issues of using intelligence to control satellite links, and then presents as a case study the architecture of a specific system: the Internet Delivery System, which uses INTELSAT’s satellite fleet to create Internet connections that act as wormholes between points on the globe.

Augmented Reality

Added on: September 28th, 2013 by No Comments

Augmented Reality (AR) is a growing area in virtual reality research. Computer graphics have become much more sophisticated, and game graphics are pushing the barriers of photo realism. Now, researchers and engineers are pulling graphics out of your television screen or computer display and integrating them into real-world environments. This new technology blurs the line between what’s real and what’s computer-generated by enhancing what we see, hear, feel and smell.
The basic idea of augmented reality is to superimpose graphics, audio and other sensory enhancements over a real-world environment in real time. An augmented reality system generates a composite view for the user. It is a combination of the real scene viewed by the user and a virtual scene generated by the computer that augments the scene with additional information.
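
The compositing step described above can be written as standard per-pixel alpha blending, out = α·virtual + (1 − α)·real, where α is a mask marking where the rendered graphics should cover the camera view. This is a minimal stand-in; real AR systems must also solve tracking and registration so the overlay lands in the right place.

```python
import numpy as np

def composite(real, virtual, alpha):
    """Blend a virtual overlay onto the real scene: out = a*virtual + (1-a)*real."""
    return (alpha * virtual + (1.0 - alpha) * real).astype(real.dtype)

real    = np.full((2, 2, 3), 100, dtype=np.uint8)  # stand-in camera frame
virtual = np.full((2, 2, 3), 200, dtype=np.uint8)  # stand-in rendered graphics
alpha   = np.zeros((2, 2, 1))                      # per-pixel coverage mask
alpha[0, 0] = 1.0                                  # graphics cover one pixel

out = composite(real, virtual, alpha)
```

Only the masked pixel takes the virtual value; everywhere else the real scene shows through, which is what makes the result a composite view rather than a fully synthetic one.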

5 Pen PC Technology

Added on: September 27th, 2013 by 2 Comments

5 Pen PC Technology, called P-ISM ("Pen-style Personal Networking Gadget Package"), is a new concept still under development by NEC Corporation. At the 2003 ITU Telecom World exhibition held in Geneva, the Tokyo-based NEC Corporation displayed a conceptual $30,000 prototype of P-ISM. It is simply a new invention in computing, associated with the communication field, and it will surely have a great impact on the computer field. In this device, Bluetooth is the main interconnecting technology between the different peripherals. P-ISM is a gadget package with five functions: a pen-style cellular phone with a handwriting data input function, a virtual keyboard, a very small projector, a camera scanner, and a personal ID key with a cashless pass function. P-ISMs are connected to one another through short-range wireless technology, and the whole set is connected to the Internet through the cellular phone function. This personal gadget in a minimalist pen style enables the ultimate in ubiquitous computing.

Ad hoc Networks

Added on: September 25th, 2013 by No Comments

Recent advances in portable computing and wireless technologies are opening up exciting possibilities for the future of wireless mobile networking. A mobile ad hoc network (MANET) is an autonomous system of mobile hosts connected by wireless links. Mobile networks can be classified into infrastructure networks and mobile ad hoc networks according to their dependence on fixed infrastructures. In an infrastructure mobile network, mobile nodes have wired access points (or base stations) within their transmission range.
The access points compose the backbone for an infrastructure network. In contrast, mobile ad hoc networks are autonomously self-organized networks without infrastructure support. In a mobile ad hoc network, nodes move arbitrarily, so the network may experience rapid and unpredictable topology changes. Additionally, because nodes in a mobile ad hoc network normally have limited transmission ranges, some nodes cannot communicate directly with each other. Hence, routing paths in mobile ad hoc networks potentially contain multiple hops, and every node in a mobile ad hoc network has the responsibility to act as a router.
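The multi-hop behaviour can be illustrated with a toy route-discovery sketch: a plain breadth-first search over a snapshot of the topology. Real MANET routing protocols such as AODV or DSR are far more involved; this only shows why intermediate nodes must forward traffic.

```python
from collections import deque

def find_route(links, src, dst):
    """Breadth-first route discovery over the current topology: every
    node forwards packets (acts as a router), so the discovered path
    may span multiple hops."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for neigh in links.get(node, ()):
            if neigh not in parent:
                parent[neigh] = node
                queue.append(neigh)
    return None  # a topology change may have partitioned the network

# A and D are out of direct radio range; B and C relay for them.
topology = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}
print(find_route(topology, 'A', 'D'))  # ['A', 'B', 'C', 'D']
```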

Mobile ad hoc networks originated from the DARPA Packet Radio Network (PRNet) and SURAN projects. Being independent of pre-established infrastructure, mobile ad hoc networks have advantages such as rapid and easy deployment, improved flexibility and reduced costs.

Intelligent Transportation System

Added on: March 27th, 2012 by 2 Comments

National highway ITS has shifted from traditional "Transportation Management" to "Integrated Road Transportation Management", which incorporates road safety and road management technologies. This paper describes a road management system suitable for national highway ITS. When an efficient integrated road transportation management system (transportation plus road management) is built by introducing such a system into national highway ITS, reductions in traffic congestion cost, travel time and traffic accidents, as well as improved road management, can be expected from the integration of road safety and management technology. Based on the automobiles and mobile phones already widespread in Korea, the creation of new markets in the telematics and ubiquitous-computing areas is also highly expected.
Keywords: Advanced Road Management System, Intelligent Transportation System (ITS), IT technology

Light Peak

Added on: March 27th, 2012 by No Comments

Light Peak is the code name for a new high-speed optical cable technology designed to connect electronic devices to each other. Light Peak delivers high bandwidth starting at 10 Gb/s, with the potential to scale to 100 Gb/s over the next decade. At 10 Gb/s, a full-length Blu-ray movie can be transferred in less than 30 seconds. Light Peak allows for smaller connectors and longer, thinner, and more flexible cables than currently possible. Light Peak also has the ability to run multiple protocols simultaneously over a single cable, enabling the technology to connect devices such as peripherals, displays, disk drives, docking stations, and more.
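The Blu-ray figure is easy to sanity-check with back-of-the-envelope arithmetic, assuming a 25 GB single-layer disc and ignoring protocol overhead (an idealized figure, not a measured one):

```python
# Sanity-check the claim: a 25 GB disc over a 10 Gb/s link.
disc_bytes = 25 * 10**9          # 25 GB single-layer Blu-ray
link_bits_per_s = 10 * 10**9     # 10 Gb/s Light Peak link
seconds = disc_bytes * 8 / link_bits_per_s
print(seconds)  # 20.0 -- comfortably under the quoted 30 seconds
```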

Free Space Optics

Added on: March 26th, 2012 by No Comments

Free space optics (FSO) is a line-of-sight technology that currently enables optical transmission of up to 2.5 Gbps of data, voice, and video communications through the air, allowing optical connectivity without deploying fiber-optic cable or securing spectrum licenses. An FSO system can carry full-duplex data at gigabit-per-second rates over metropolitan distances of a few city blocks to a few kilometers. FSO, also known as optical wireless, overcomes the last-mile access bottleneck by sending high-bitrate signals through the air using laser transmission.

Security Issues in Grid Computing

Added on: March 25th, 2012 by No Comments

The last decade has seen a considerable increase in commodity computer and network performance, mainly as a result of faster hardware and more sophisticated software. Nevertheless, there are still problems in the fields of science, engineering and business which cannot be dealt with effectively by the current generation of supercomputers. In fact, due to their size and complexity, these problems are often numerically and/or data intensive and require a variety of heterogeneous resources that are not available in a single machine. A number of teams have conducted experimental studies on the cooperative use of geographically distributed resources conceived as a single powerful computer. This approach is known by several names, such as metacomputing, seamless scalable computing, global computing and, more recently, Grid Computing.

The early efforts in Grid Computing started as a project to link supercomputing sites, but it has since grown far beyond its original intent. The rapid and impressive growth of the Internet has made it an attractive means of sharing information across the globe. The idea of grid computing emerged from the fact that the Internet can also be used for several other purposes, such as sharing computing power, storage space, scientific devices and software programs. The term "Grid" was chosen because it is analogous to the electrical power grid, which provides consistent, pervasive and ubiquitous power irrespective of its source. The main aim of this paper is to present the state-of-the-art and the issues in Grid computing.

This paper presents the state-of-the-art of Grid computing and surveys the major international efforts in developing this emerging technology.

Optical Fiber Communication System

Added on: March 25th, 2012 by No Comments

Communication is an important part of our daily life. The communication process involves information generation, transmission, reception and interpretation. As the need for various types of communication such as voice, images, video and data grows, demand for large transmission capacity also grows. This need for capacity has driven the rapid development of lightwave technology, and a worldwide industry has developed around it. An optical or lightwave communication system is one that uses light waves as the carrier for transmission. An optical communication system involves three main parts: transmitter, receiver and channel. In optical communication, the transmitters are light sources, the receivers are light detectors, and the channel is optical fiber. The channel plays a central role because it carries the data from transmitter to receiver; hence this report focuses mainly on optical fibers.

Humanoid Robot

Added on: March 23rd, 2012 by 4 Comments

The field of humanoid robotics is widely recognized as the current challenge for robotics research. Humanoid research is an approach to understanding and realizing the complex real-world interactions between a robot, an environment, and a human. Humanoid robotics motivates social interactions such as gesture communication or cooperative tasks in the same context as the physical dynamics. This is essential for three-term interaction, which aims at fusing physical and social interaction at fundamental levels.

People naturally express themselves through facial gestures and expressions. Our goal is to build a facial-gesture human-computer interface for use in robot applications. The system does not require special illumination or facial make-up. By using multiple Kalman filters we accurately predict and robustly track facial features. Since we reliably track the face in real time, we are also able to recognize motion gestures of the face. Our system can recognize a large set of gestures (13), ranging from "yes", "no" and "maybe" to detecting winks, blinks and sleeping.
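A minimal sketch of the idea behind per-feature Kalman tracking follows: a constant-velocity filter for a single image coordinate, with one such filter per tracked feature axis. The noise values and measurements are illustrative, not the described system's actual parameters.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one image coordinate."""
    def __init__(self, pos, q=1e-3, r=1.0):
        self.x = [pos, 0.0]                  # state: position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]    # state covariance
        self.q, self.r = q, r                # process / measurement noise

    def predict(self, dt=1.0):
        x, v = self.x
        self.x = [x + v * dt, v]
        P = self.P
        self.P = [[P[0][0] + dt*(P[1][0] + P[0][1]) + dt*dt*P[1][1] + self.q,
                   P[0][1] + dt*P[1][1]],
                  [P[1][0] + dt*P[1][1], P[1][1] + self.q]]
        return self.x[0]

    def update(self, z):
        S = self.P[0][0] + self.r            # innovation covariance
        K0, K1 = self.P[0][0] / S, self.P[1][0] / S
        y = z - self.x[0]                    # innovation
        self.x = [self.x[0] + K0 * y, self.x[1] + K1 * y]
        P = self.P
        self.P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
                  [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]

# Track the x-coordinate of an eye corner drifting ~2 px per frame.
kf = Kalman1D(pos=100.0)
for z in [102.1, 103.9, 106.2, 108.0]:
    kf.predict()
    kf.update(z)
print(round(kf.predict()))  # predicted position for the next frame
```

Running a bank of these filters (two per feature, x and y) gives the prediction step that keeps tracking robust when a feature is briefly occluded.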


Secure Socket Layer (SSL)

Added on: March 20th, 2012 by No Comments

Secure Socket Layer (SSL) denotes the predominant security protocol of the Internet for World Wide Web (WWW) services relating to electronic commerce or home banking.

The majority of web servers and browsers support SSL as the de-facto standard for secure client-server communication. The Secure Socket Layer protocol builds up point-to-point connections that allow private and unimpaired message exchange between strongly authenticated parties.

In the ISO/OSI reference model [ISO7498], SSL resides in the session layer between the transport layer (4) and the application layer (7); with respect to the Internet family of protocols this corresponds to the range between TCP/IP and application protocols such as HTTP, FTP, Telnet, etc. SSL provides no intrinsic synchronization mechanism; it relies on the reliable transport layer below.

The SSL protocol allows mutual authentication between a client and server and the establishment of an authenticated and encrypted connection. SSL runs above TCP/IP and below HTTP, LDAP, IMAP, NNTP, and other high-level network protocols.
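That layering can be illustrated with Python's standard ssl module (a sketch; the host name is a placeholder, and modern Python actually negotiates TLS, SSL's successor): an ordinary TCP socket is wrapped in an authenticated, encrypted channel, and HTTP then runs over it.

```python
import socket
import ssl

def fetch_https_status(host):
    """Open a TCP connection, wrap it in TLS (server certificate and
    hostname are verified by the default context), then speak HTTP
    over the encrypted channel and return the status line."""
    context = ssl.create_default_context()   # verification on by default
    with socket.create_connection((host, 443), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode() +
                        b"\r\nConnection: close\r\n\r\n")
            return tls.recv(1024).split(b"\r\n")[0]

# Usage (requires network access):
# print(fetch_https_status("example.org"))
```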

Shallow Water Acoustic Networks

Added on: March 18th, 2012 by No Comments

Shallow water acoustic networks are generally formed by acoustically connected ocean-bottom sensor nodes, autonomous underwater vehicles (AUVs), and surface stations that serve as gateways and provide radio communication links to on-shore stations. The QoS of such networks is limited by the low bandwidth of acoustic transmission channels, high latency resulting from the slow propagation of sound, and elevated noise levels in some environments. The long-term goal in the design of underwater acoustic networks is a self-configuring network of distributed nodes whose links automatically adapt to the environment through selection of the optimum system parameters. This paper considers several aspects of the design of shallow water acoustic networks that maximize throughput and reliability while minimizing power consumption.

In the last two decades, underwater acoustic communications has experienced significant progress. The traditional approach for ocean-bottom or ocean-column monitoring is to deploy oceanographic sensors, record the data, and recover the instruments; this approach fails for real-time monitoring. The ideal solution for real-time monitoring of selected ocean areas over long periods is to connect the instruments through wireless links within a network structure. Basic underwater acoustic networks are formed by establishing bidirectional acoustic communication between nodes such as AUVs and fixed sensors. The network is then connected to a surface station, which can in turn be connected to terrestrial networks such as the Internet.
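The latency point is easy to quantify: sound in seawater travels at roughly 1500 m/s, so even a short acoustic hop has a round-trip delay that dwarfs radio. The figures below are nominal and ignore processing delays.

```python
SOUND_M_S = 1500.0     # nominal speed of sound in seawater, m/s
RADIO_M_S = 3.0e8      # speed of light, m/s

def round_trip_s(distance_m, speed_m_s):
    """Two-way propagation delay for a single link."""
    return 2 * distance_m / speed_m_s

print(round_trip_s(3000, SOUND_M_S))   # 4.0 s for a 3 km acoustic hop
print(round_trip_s(3000, RADIO_M_S))   # 2e-05 s for the same hop by radio
```

A four-second round trip per hop is why handshake-heavy protocols designed for radio perform poorly underwater.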

Keywords: Underwater sensor networks, acoustic networks, acoustic communication architectures.

Confidential Data Storage and Deletion

Added on: March 18th, 2012 by No Comments

With the decrease in cost of electronic storage media, more and more sensitive data gets stored on those media. Laptop computers regularly go missing, either because they are lost or because they are stolen. These laptops contain confidential information in the form of documents, presentations, emails, cached data, and network access credentials. This confidential information is typically far more valuable than the laptop hardware, if it reaches the wrong people. There are two major aspects to safeguarding the privacy of data on these storage media. First, data must be stored in a confidential manner. Second, we must make sure that confidential data, once deleted, can no longer be restored. Various methods exist to store confidential data, such as encryption programs and encrypting file systems; Microsoft BitLocker Drive Encryption, for example, provides encryption for hard disk volumes and is available with the Windows Vista Ultimate and Enterprise editions. This seminar describes the most commonly used encryption algorithm, the Advanced Encryption Standard (AES), which underlies many of these confidential data storage methods. It also describes some confidential data erasure methods, such as physical destruction, data overwriting, and key erasure.
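A minimal sketch of the data-overwriting approach mentioned above (illustrative only; on journaling file systems and SSDs the original blocks may survive remapping, which is exactly why key-erasure methods also exist):

```python
import os

def overwrite_and_delete(path, passes=3):
    """Replace the file's contents with random bytes several times,
    flushing to disk each pass, then unlink it."""
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(length))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

# Demonstration on a throwaway file.
with open("secret.txt", "wb") as f:
    f.write(b"confidential")
overwrite_and_delete("secret.txt")
print(os.path.exists("secret.txt"))  # False
```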

Keywords: Privacy of data, confidential data storage, Encryption, Advanced Encryption Standard (AES), Microsoft Bit Locker, Confidential data erasure, Data overwriting, Key erasure.

Gi-Fi Technology

Added on: March 15th, 2012 by 6 Comments

Gi-Fi or Gigabit Wireless is the world's first transceiver integrated on a single chip that operates at 60 GHz on the CMOS process. It will allow wireless transfer of audio and video data at up to 5 gigabits per second, ten times the current maximum wireless transfer rate, at one-tenth of the cost, usually within a range of 10 meters. It utilizes a 5 mm square chip and a 1 mm wide antenna burning less than 2 milliwatts of power to transmit data wirelessly over short distances, much like Bluetooth.

Gi-Fi will help push wireless communications into a faster lane. For many years cables ruled the world; optical fibers played a dominant role thanks to their higher bit rates and faster transmission. But the difficulty of installing cables led to wireless access. The foremost of these technologies is Bluetooth, which can cover 9 to 10 meters; Wi-Fi followed, with a coverage area of about 91 meters. No doubt the introduction of Wi-Fi wireless networks has proved a revolutionary solution to the "last mile" problem. However, the standard's original limitations on data exchange rate and range, its number of changes, and the high cost of the infrastructure have not yet made it possible for Wi-Fi to become a real threat to cellular networks on the one hand, or hard-wired networks on the other. Man's continuous quest for still better technology, despite the substantial advantages of present technologies, led to the introduction of a new, more up-to-date standard for data exchange rate: Gi-Fi.

The development will enable the truly wireless office and home of the future. As the integrated transceiver is extremely small, it can be embedded into devices. The breakthrough will mean the networking of office and home equipment without wires will finally become a reality.
In this seminar we present a low-cost, low-power, high-bandwidth chip, which will be vital in enabling the digital economy of the future.

ARM Processor

Added on: March 14th, 2012 by 1 Comment

An ARM processor is any of several 32-bit RISC (reduced instruction set computer) microprocessors developed by Advanced RISC Machines, Ltd. The ARM architecture was originally conceived by Acorn Computers Ltd. in the 1980s. Since then, it has evolved into a family of microprocessors extensively used in consumer electronic devices such as mobile phones, multimedia players, pocket calculators and PDAs (personal digital assistants).
ARM processor features include:

  • Load/store architecture
  • An orthogonal instruction set
  • Mostly single-cycle execution
  • A 16×32-bit register file
  • Enhanced power-saving design

ARM provides developers with intellectual property (IP) solutions in the form of processors, physical IP, cache and SoC designs, application-specific standard products (ASSPs), related software and development tools — everything you need to create an innovative product design based on industry-standard components that are ‘next generation’ compatible.

Zigbee Technology

Added on: March 14th, 2012 by No Comments

ZigBee is a communication standard that provides short-range, cost-effective networking capability. It has been developed with an emphasis on low-cost, battery-powered applications such as building automation and industrial and commercial control. ZigBee was introduced by the IEEE and the ZigBee Alliance to provide a first general standard for these applications. The IEEE (Institute of Electrical and Electronics Engineers) is a non-profit organization dedicated to furthering technology involving electronics and electronic devices. The 802 group is the section of the IEEE involved in network operations and technologies, including mid-sized and local networks. Group 15 deals specifically with wireless networking technologies, and includes the now ubiquitous 802.15.1 working group, also known as Bluetooth.

The name “ZigBee” is derived from the erratic zigging patterns many bees make between flowers when collecting pollen. This is evocative of the invisible webs of connections existing in a fully wireless environment. The standard itself is regulated by a group known as the ZigBee Alliance, with over 150 members worldwide.
While Bluetooth focuses on connectivity between large packet user devices, such as laptops, phones, and major peripherals, ZigBee is designed to provide highly efficient connectivity between small packet devices. As a result of its simplified operations, which are one to two full orders of magnitude less complex than a comparable Bluetooth device, pricing for ZigBee devices is extremely competitive, with full nodes available for a fraction of the cost of a Bluetooth node.

ZigBee devices are limited to a data rate of 250 kbps, compared to Bluetooth's much larger pipeline of 1 Mbps, and operate on the 2.4 GHz ISM band, which is available throughout most of the world.
ZigBee has been developed to meet the growing demand for capable wireless networking between numerous low-power devices. In industry ZigBee is being used for next generation automated manufacturing, with small transmitters in every device on the floor, allowing for communication between devices to a central computer. This new level of communication permits finely-tuned remote monitoring and manipulation. In the consumer market ZigBee is being explored for everything from linking low-power household devices such as smoke alarms to a central housing control unit, to centralized light controls.

The specified maximum range of operation for ZigBee devices is 250 feet (76 m), substantially further than that of Bluetooth-capable devices, although the security concerns raised over remotely "sniping" Bluetooth devices may prove to hold true for ZigBee devices as well.

Due to its low power output, ZigBee devices can sustain themselves on a small battery for many months, or even years, making them ideal for install-and-forget purposes, such as most small household systems. Predictions of ZigBee installation for the future, most based on the explosive use of ZigBee in automated household tasks in China, look to a near future when upwards of sixty ZigBee devices may be found in an average American home, all communicating with one another freely and regulating common tasks seamlessly.

Bluetooth Low Energy

Added on: March 14th, 2012 by No Comments

Now that wireless connections are established solutions in various sectors of consumer electronics, the question arises whether devices that draw long life from a small battery could find benefit as well in a global standard for wireless low energy technology. Makers of sensors for sports, health and fitness devices have dabbled in wireless but not together, while manufacturers of products like watches have never even considered adding wireless functionality because no options were available. Several wireless technologies have tried to address the needs of the button cell battery market, but most were proprietary and garnered little industry support. However, none of these technologies let smaller manufacturers plug in to a global standard that provides a viable link with devices like mobile phones and laptops.

However, companies that want to make their small devices wireless need to build and sell either a dedicated display unit or an adapter that connects to a computing platform such as a mobile phone, PC or iPod. Few successful products have followed this route to a mass market. A new flavor of Bluetooth technology may be just the answer, and a more efficient alternative to yet another wireless standard.

In the ten years since engineers from a handful of companies came together to create the first Bluetooth specification, Bluetooth technology has become a household term, a globally recognized standard for connecting portable devices. The Bluetooth brand ranks among the top ingredient technology brands worldwide, recognized by a majority of consumers around the world. A thriving global industry of close to 11,000 member companies now designs Bluetooth products and works together to develop future generations of the technology, found in well over 50 percent of mobile phones worldwide and with more than two billion devices shipped to date. Bluetooth wireless technology has established the standard for usability, ease of setup and compatibility across all manufacturers. A well-established set of Bluetooth profiles define the communication needs for a wide range of applications, making it easy for a manufacturer to add Bluetooth wireless connectivity to new devices — from phones to headsets to printers — with a minimum of programming and testing work.

Pervasive Computing

Added on: March 14th, 2012 by No Comments

Pervasive computing refers to embedding computers and communication in our environment. This provides an attractive vision for the future of computing. The idea behind the pervasive computing is to make the computing power disappear in the environment, but will always be there whenever needed or in other words it means availability and invisibility. These invisible computers won’t have keyboards or screens, but will watch us, listen to us and interact with us. Pervasive computing makes the computer operate in the messy and unstructured world of real people and real objects. Distributed devices in this environment must have the ability to dynamically discover and integrate other devices. The prime goal of this technology is to make human life more simple, safe and efficient by using the ambient intelligence of computers.

Holographic Memory

Added on: March 14th, 2012 by No Comments

Devices that use light to store and read data have been the backbone of data storage for nearly two decades. Compact discs revolutionized data storage in the early 1980s, allowing multi-megabytes of data to be stored on a disc that has a diameter of a mere 12 centimeters and a thickness of about 1.2 millimeters. In 1997, an improved version of the CD, called a digital versatile disc (DVD), was released, which enabled the storage of full-length movies on a single disc.

CDs and DVDs are the primary data storage methods for music, software, personal computing and video. A CD can hold 783 megabytes of data. A double-sided, double-layer DVD can hold 15.9 GB of data, which is about eight hours of movies. These conventional storage mediums meet today’s storage needs, but storage technologies have to evolve to keep pace with increasing consumer demand. CDs, DVDs and magnetic storage all store bits of information on the surface of a recording medium. In order to increase storage capabilities, scientists are now working on a new optical storage method called holographic memory that will go beneath the surface and use the volume of the recording medium for storage, instead of only the surface area. Three-dimensional data storage will be able to store more information in a smaller space and offer faster data transfer times.

Holographic memory is a developing technology that promises to revolutionize storage systems. It can store up to 1 TB of data in a sugar-cube-sized crystal; data from more than 1,000 CDs can fit into a holographic memory system. Most of the computer hard drives available today can hold only 10 to 40 GB of data, a small fraction of what a holographic memory system can hold. Conventional memories use only the surface of the medium to store data, whereas holographic data storage systems use its volume, which gives them advantages over conventional storage systems. The technique is based on the principle of holography.

Scientist Pieter J. van Heerden first proposed the idea of holographic (three-dimensional) storage in the early 1960s. A decade later, scientists at RCA Laboratories demonstrated the technology by recording 500 holograms in an iron-doped lithium-niobate crystal and 550 holograms of high-resolution images in a light-sensitive polymer material. The lack of cheap parts and the advancement of magnetic and semiconductor memories placed the development of holographic data storage on hold.

IRIS Recognition

Added on: March 12th, 2012 by 1 Comment

Iris recognition is an automated method of capturing a person's unique biological data that distinguishes him or her from another individual. It has emerged as one of the most powerful and accurate identification techniques in the modern world, and has proven to be one of the most foolproof techniques for identifying individuals without the use of cards, PINs or passwords. It facilitates automatic identification whereby electronic transactions and access to places, information or accounts are made easier, quicker and more secure.

A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person's face is the detailed texture of each eye's iris: an estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees of freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person's iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most significant bits comprise a 512-byte "iris code". Statistical decision theory generates identification decisions from Exclusive-OR comparisons of complete iris codes at the rate of 4,000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical "cross-over" error rate of one in 131,000 when a decision criterion is adopted that would equalize the False Accept and False Reject error rates.
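The Exclusive-OR comparison reduces to a normalized Hamming distance between two iris codes. The sketch below uses short 64-byte codes with illustrative bit patterns rather than real Gabor-wavelet output, and the decision labels in the comments are likewise illustrative:

```python
def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two equal-length iris
    codes, computed via Exclusive-OR -- the statistical test of
    independence described above."""
    assert len(code_a) == len(code_b)
    diff = int.from_bytes(code_a, "big") ^ int.from_bytes(code_b, "big")
    return bin(diff).count("1") / (8 * len(code_a))

same_eye = bytes([0b10110010] * 64)
noisy    = bytes([0b10110011] * 64)   # one bit per byte flipped by noise
other    = bytes([0b01001101] * 64)   # bitwise complement: all bits differ

print(hamming_distance(same_eye, noisy))   # 0.125 -- low: test of independence fails, same eye
print(hamming_distance(same_eye, other))   # 1.0   -- high: independent patterns
```

Two codes from the same eye disagree on few bits, so the independence test fails (a match); codes from different eyes disagree on roughly half their bits or more, so the test passes.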