Touch Screen

Added on: March 12th, 2012 by No Comments

A touch screen is a computer display screen that is sensitive to human touch, allowing a user to interact with the computer by touching pictures or words on the screen. Touch screens are used with information kiosks (interactive computer terminals available for public use, such as those offering internet access or site-specific information), computer-based training devices, and systems designed to help individuals who have difficulty manipulating a mouse or keyboard. This technology can also serve as an alternative user interface for applications that normally require a mouse, such as a web browser. Some applications are designed specifically for touch screen technology, often having larger icons and links than typical PC applications. Monitors are also available with built-in touch screen kits.

A touch screen kit includes a touch screen panel, a controller, and a software driver. The panel is a clear sheet attached externally to the monitor that plugs into a serial or Universal Serial Bus (USB) port, or into a bus card installed inside the computer. The touch screen panel registers touch events and passes these signals to the controller. The controller then processes the signals and sends the data to the processor. The software driver translates the touch events into mouse events. Drivers are available for both Windows and Macintosh operating systems. Internal touch screen kits are available but require professional installation because they must be installed inside the monitor.
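
The driver's job of turning touch events into mouse events can be pictured with a small sketch. The event types, panel resolution and screen resolution below are assumptions made for illustration, not the API of any particular touch screen driver.

    # Minimal sketch (hypothetical types): how a driver might map controller
    # touch events onto the mouse events the operating system expects.
    from dataclasses import dataclass

    @dataclass
    class TouchEvent:
        x: int          # panel coordinate reported by the controller
        y: int
        pressed: bool   # finger down or lifted

    @dataclass
    class MouseEvent:
        x: int          # screen coordinate
        y: int
        button_down: bool

    def translate(event, panel_res=(4096, 4096), screen_res=(1920, 1080)):
        """Scale panel coordinates to screen coordinates; a touch becomes a left click."""
        sx = event.x * screen_res[0] // panel_res[0]
        sy = event.y * screen_res[1] // panel_res[1]
        return MouseEvent(sx, sy, event.pressed)

    print(translate(TouchEvent(x=2048, y=1024, pressed=True)))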

Virtual Retinal Display

Added on: March 12th, 2012 by No Comments

The Virtual Retinal Display (VRD) is a personal display device under development at the University of Washington’s Human Interface Technology Laboratory in Seattle, Washington, USA. The VRD scans light directly onto the viewer’s retina, and the viewer perceives a wide field-of-view image. Because the VRD scans light directly onto the retina, it is not a screen-based technology.

The VRD was invented at the University of Washington in the Human Interface Technology Lab (HIT) in 1991. Development began in November 1993. The aim was to produce a full color, wide field-of-view, high resolution, high brightness, low cost virtual display. Microvision Inc. holds the exclusive license to commercialize the VRD technology. The technology has many potential applications, from head-mounted displays (HMDs) for military and aerospace use to medical applications.

The VRD projects a modulated beam of light (from an electronic source) directly onto the retina of the eye, producing a rasterized image. The viewer has the illusion of seeing the source image as if he or she were standing two feet in front of a 14-inch monitor. In reality, the image is on the retina of the viewer's eye and not on a screen. The image quality is excellent, with stereo view, full color, a wide field of view, and no flicker.

Interplanetary Internet

Added on: March 12th, 2012 by No Comments

Increasingly, network applications must communicate with counterparts across disparate networking environments characterized by significantly different sets of physical and operational constraints; wide variations in transmission latency are particularly troublesome. The proposed Interplanetary Internet (IPN), which must encompass both terrestrial and interplanetary links, is an extreme case. An architecture based on a protocol that can operate successfully and reliably in multiple disparate environments would simplify the development and deployment of such applications. The Internet protocols are ill suited for this purpose. They are, in general, poorly suited to operation on paths in which some of the links operate intermittently or over extremely long propagation delays. The principal problem is reliable transport, but the operation of the Internet’s routing protocols would also raise troubling issues.

It is this analysis that leads us to propose an architecture based on Internet-independent middleware: use exactly those protocols at all layers that are best suited to operation within each environment, but insert a new overlay network protocol between the applications and the locally optimized stacks. This new protocol layer, called the bundle layer, ties together the region-specific lower layers so that application programs can communicate across multiple regions.
The DTN architecture implements store-and-forward message switching.

A DTN is a network of regional networks, where a regional network is a network that is adapted to a particular communication region, wherein communication characteristics are relatively homogeneous. Thus, DTNs support interoperability of regional networks by accommodating long delays between and within regional networks, and by translating between regional communication characteristics.
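
To make the store-and-forward idea concrete, here is a toy sketch (not the DTN reference implementation, and with invented node and field names) in which a bundle node keeps messages until a contact to the next region comes up:

    # Illustrative sketch: a bundle-layer node stores application messages and
    # forwards them only when a contact to the next region is available, hiding
    # long or intermittent delays from the application.
    from collections import deque

    class BundleNode:
        def __init__(self, name):
            self.name = name
            self.store = deque()        # persistent store in a real DTN node

        def receive(self, bundle):
            self.store.append(bundle)   # keep the bundle until it can move on

        def contact(self, next_hop):
            """Called when a link to the next region comes up; flush stored bundles."""
            while self.store:
                next_hop.receive(self.store.popleft())

    earth_gw, relay, mars_gw = BundleNode("earth"), BundleNode("relay"), BundleNode("mars")
    earth_gw.receive({"dst": "mars", "payload": "telemetry request"})
    earth_gw.contact(relay)       # link available: bundle moves one region closer
    relay.contact(mars_gw)        # a later contact delivers it to the destination region
    print(mars_gw.store)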

Wibree Technology

Added on: March 11th, 2012 by No Comments

Now that wireless connections are established solutions in various sectors of consumer electronics, the question arises whether devices that draw long life from a small battery could find benefit as well in a global standard for wireless low energy technology. Makers of sensors for sports, health and fitness devices have dabbled in wireless but not together, while manufacturers of products like watches have never even considered adding wireless functionality because no options were available. Several wireless technologies have tried to address the needs of the button cell battery market, but most were proprietary and garnered little industry support. Moreover, none of these technologies let smaller manufacturers plug in to a global standard that provides a viable link with devices like mobile phones and laptops.

However, companies that want to make their small devices wireless need to build and sell either a dedicated display unit or an adapter that connects to a computing platform such as a mobile phone, PC or iPod. There have been few successful products that followed this route to a mass market. A new flavor of Bluetooth technology may be just the answer, and a more efficient alternative to yet another wireless standard.

Self Managing Computing

Added on: March 11th, 2012 by 1 Comment

Self managing computing helps address the complexity issues by using technology to manage technology. The idea is not new: many of the major players in the industry have developed and delivered products based on this concept. Self managing computing is also known as autonomic computing.

Autonomic Computing is an initiative started by IBM in 2001. Its ultimate aim is to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth. In other words, autonomic computing refers to the self-managing characteristics of distributed computing resources, adapting to unpredictable changes while hiding intrinsic complexity to operators and users.

The term autonomic is derived from human biology. The autonomic nervous system monitors your heartbeat, checks your blood sugar level and keeps your body temperature close to 98.6°F, without any conscious effort on your part. In much the same way, self managing computing components anticipate computer system needs and resolve problems with minimal human intervention.

Self managing computing systems have the ability to manage themselves and dynamically adapt to change in accordance with business policies and objectives. Self-managing systems can perform management activities based on situations they observe or sense in the IT environment. Rather than IT professionals initiating management activities, the system observes something about itself and acts accordingly. This allows the IT professional to focus on high-value tasks while the technology manages the more mundane operations. Self managing computing can result in a significant improvement in system management efficiency, when the disparate technologies that manage the environment work together to deliver performance results system wide.
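
The observe-and-act behaviour described above is often structured as a monitor, analyze, plan, execute loop. The sketch below is only an illustration of that control loop; the metric names, thresholds and actions are invented for the example.

    # Toy monitor-analyze-plan-execute loop, sketched after the autonomic-computing
    # idea described above; metrics, policy values and actions are hypothetical.
    def monitor(system):
        return {"cpu": system["cpu"], "queue": system["queue"]}

    def analyze(metrics, policy):
        return metrics["cpu"] > policy["max_cpu"] or metrics["queue"] > policy["max_queue"]

    def plan(system):
        return {"add_workers": 1} if system["workers"] < 8 else {"shed_load": True}

    def execute(system, action):
        system["workers"] += action.get("add_workers", 0)
        return system

    policy = {"max_cpu": 0.8, "max_queue": 100}
    system = {"cpu": 0.92, "queue": 150, "workers": 2}

    metrics = monitor(system)
    if analyze(metrics, policy):                 # the system observes something about itself...
        system = execute(system, plan(system))   # ...and acts without operator intervention
    print(system)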

However, complete autonomic systems do not yet exist. This is not a proprietary solution. It’s a radical change in the way businesses, academia, and even the government design, develop, manage and maintain computer systems. Self managing computing calls for a whole new area of study and a whole new way of conducting business.

Self managing computing is the self-management of e-business infrastructure, balancing what is managed by the IT professional and what is managed by the system. It is the evolution of e-business.

Virtual Keyboard

Added on: March 11th, 2012 by No Comments

Virtual Keyboard is just another example of today’s computing trend of ‘smaller and faster’. Computing is now not limited to desktops and laptops; it has found its way into mobile devices like palmtops and even cell phones. But what has not changed for the last 50-odd years is the input device, the good old QWERTY keyboard. The virtual keyboard technology is the latest development.

The virtual keyboard technology uses sensor technology and artificial intelligence to let users work on any flat surface as if it were a keyboard. Virtual Keyboards let you easily create multilingual text content on almost any existing platform and output it directly to PDAs or even web pages. Virtual Keyboard, being a small, handy, well-designed and easy to use application, turns into a perfect solution for cross-platform text input.
The main features are: platform-independent multilingual support for keyboard text input, built-in language layouts and settings, support for copy/paste and similar operations just as in a regular text editor, no change to already existing system language settings, an easy and user-friendly interface and design, and a small file size.
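
Once the sensors have located where a fingertip landed on the flat surface, the remaining software step is simply mapping that point onto the projected key layout. The grid, key sizes and coordinates below are hypothetical, chosen only to illustrate that mapping.

    # Hypothetical sketch: map a detected fingertip position on the flat surface
    # onto the projected key grid (key sizes in arbitrary sensor units).
    KEY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
    KEY_W, KEY_H = 40, 60      # assumed size of one projected key

    def point_to_key(x, y):
        row = int(y // KEY_H)
        col = int(x // KEY_W)
        if 0 <= row < len(KEY_ROWS) and 0 <= col < len(KEY_ROWS[row]):
            return KEY_ROWS[row][col]
        return None            # touch fell outside the projected layout

    print(point_to_key(100, 70))   # -> 'd'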

The report first gives an overview of the QWERTY keyboards and the difficulties arising from using them. It then gives a description about the virtual keyboard technology and the various types of virtual keyboards in use. Finally the advantages, drawbacks and the applications are discussed.

Biocomputers

Added on: March 11th, 2012 by 2 Comments

Biocomputing is one of the upcoming fields in the areas of molecular electronics and nanotechnology. The idea behind blending biology with technology stems from the limitations faced by semiconductor designers in decreasing the size of silicon chips, which directly affects processor speed. Biocomputers consist of biochips, unlike normal computers, which are silicon-based. These biochips consist of biomaterials such as nucleic acids, enzymes, etc.

The power of a biocomputer is that it acts as a massively parallel computer and has immense data storage capability. Thus, it can be used to solve NP-complete problems with higher efficiency. The possibilities for biocomputers include developing a credit-card-size computer that could design a super-efficient global air-traffic control system. The basic idea behind biocomputing is to use molecular reactions for computational purposes.


Viruses and Worms

Added on: March 11th, 2012 by No Comments

One of the most high-profile threats to information integrity is the computer virus. In this paper, I present what viruses, worms, and Trojan horses are and their differences, different strategies of virus spreading, and case studies of the Slammer and Blaster worms.

The internet consists of hundreds of millions of computers distributed around the world. Millions of people use the internet daily, taking full advantage of the available services at both personal and professional levels. The internet connectivity among computers on which the World Wide Web relies, however, renders its nodes an easy target for malicious users who attempt to exhaust their resources, damage their data or create havoc in the network.
Computer viruses, especially in recent years, have increased dramatically in number. One of the most high-profile threats to information integrity is the computer virus.

Surprisingly, PC viruses have been around for two-thirds of the IBM PC’s lifetime, first appearing in 1986. With global computing on the rise, computer viruses have had more visibility in the past few years. In fact, the entertainment industry has helped by illustrating the effects of viruses in movies such as “Independence Day”, “The Net”, and “Sneakers”. Along with computer viruses, computer worms are also increasing day by day. So, there is a need to immunise the internet by creating awareness among people about these threats in detail. In this paper I explain the basic concepts of viruses and worms and how they spread.

The basic organisation of the paper is as follows. Section 2 gives some preliminaries: the definitions of computer viruses, worms and Trojan horses, as well as some other malicious programs, and the basic characteristics of a virus.

Section 3 gives a detailed description: the malicious code environments in which a virus can propagate, an overview of virus and worm types, and the categories of worms, explained in a broad sense. Section 4 covers file infection techniques, describing the various infection mechanisms of a virus. Section 5, Steps in Worm Propagation, describes the basic steps a typical worm follows to propagate.

Section 6 presents two case studies, of the Slammer and Blaster worms.

Stream Control Transmission Protocol

Added on: March 11th, 2012 by No Comments

The Stream Control Transmission Protocol (SCTP) is a new IP transport protocol, existing at the same level as UDP (User Datagram Protocol) and TCP (Transmission Control Protocol), which currently provide transport layer functions to all of the main Internet applications. UDP, RTP, TCP, and SCTP are currently the IETF standards-track transport-layer protocols. Each protocol has a domain of applicability and services it provides, albeit with some overlaps.

Like TCP, SCTP provides a reliable transport service, ensuring that data is transported across the network without error and in sequence. Like TCP, SCTP is a connection-oriented mechanism, meaning that a relationship is created between the endpoints of an SCTP session prior to data being transmitted, and this relationship is maintained until all data transmission has been successfully completed.

Unlike TCP, SCTP provides a number of functions that are considered critical for signaling transport, and which at the same time can provide transport benefits to other applications requiring additional performance and reliability.
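
On operating systems whose kernels expose SCTP (for example, Linux with SCTP support loaded), Python's standard socket module can open a one-to-one style SCTP association much as it would a TCP connection. This is only a hedged sketch; the address, port and message are placeholders and a listener is assumed to be running.

    import socket

    # socket.IPPROTO_SCTP is only defined where the platform supports SCTP.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
    try:
        sock.connect(("127.0.0.1", 9999))    # association setup (assumed listener)
        sock.sendall(b"signalling message")  # delivered reliably and in sequence
    finally:
        sock.close()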

By clarifying the situations where the functionality of these protocols is applicable, this document can guide implementers and protocol designers in selecting which protocol to use.

Special attention is given to the services SCTP provides that would make the decision to use SCTP the right one.

Sixth Sense Technology

Added on: March 11th, 2012 by No Comments

Although miniaturized versions of computers help us connect to the digital world even while we are travelling, there is, as of now, no device that gives a direct link between the digital world and our physical interaction with the real world. Usually, information is stored traditionally on paper or on a digital storage device. Sixth Sense technology helps to bridge this gap between the tangible and non-tangible worlds. The Sixth Sense device is basically a wearable gestural interface that connects the physical world around us with digital information and lets us use natural hand gestures to interact with this information. The Sixth Sense technology was developed by Pranav Mistry, a PhD student in the Fluid Interfaces Group at the MIT Media Lab.

The Sixth Sense technology has a Web 4.0 view of human and machine interactions. Sixth Sense integrates digital information into the physical world and its objects, making the entire world your computer. It can turn any surface into a touch screen for computing, controlled by simple hand gestures. It is not a technology aimed at changing human habits, but at making computers and other machines adapt to human needs. It also supports multi-user and multi-touch operation. The Sixth Sense device is a mini-projector coupled with a camera and a cell phone, which acts as the computer and the connection to the Cloud, all the information stored on the web. The current prototype costs around $350. The Sixth Sense prototype has been used to implement several applications that demonstrate the usefulness, viability and flexibility of the system.

Agent Based Computing

Added on: March 11th, 2012 by No Comments

Agent-based computing represents an exciting new synthesis for both Artificial Intelligence and more generally, Computer Science. It has the potential to improve the theory and the practice of modeling, designing and implementing complex computer systems. Yet, to date, there has been little systematic analysis of what makes the agent-based approach such an appealing and powerful computational model. To rectify this situation, this paper aims to tackle exactly this issue. The standpoint of this analysis is the role of agent-based software in solving complex, real world problems. In particular, it will be argued that the development of robust and scalable software systems requires autonomous agents that can complete their objectives while situated in a dynamic and uncertain environment, that can engage in rich, high-level interactions, and that can operate within flexible organizational structures.

Keywords: autonomous agents, agent-oriented software engineering, complex systems

3D-Doctor

Added on: March 10th, 2012 by 1 Comment

3D-DOCTOR software is used to extract information from image files to create 3D models. It was developed using object-oriented technology and provides efficient tools to process and analyze 3D images, object boundaries, 3D models and other associated data items in an easy-to-use environment. It performs 3D image segmentation, 3D surface modeling, rendering, volume rendering, 3D image processing, deconvolution, registration, automatic alignment, measurements, and many other functions. The software supports both grayscale and color images stored in DICOM, TIFF, Interfile, GIF, JPEG, PNG, BMP, PGM, RAW or other image file formats. 3D-DOCTOR creates 3D surface models and volume renderings from 2D cross-section images in real time on your PC. Leading hospitals, medical schools and research organizations around the world are currently using 3D-DOCTOR.

Medical Image Fusion

Added on: March 10th, 2012 by No Comments

Image fusion is the process by which two or more images are combined into a single image retaining the important features from each of the original images. The fusion of images is often required for images acquired from different instrument modalities or capture techniques of the same scene or objects. Important applications of the fusion of images include medical imaging, microscopic imaging, remote sensing, computer vision, and robotics. Fusion techniques include the simplest method of pixel averaging to more complicated methods such as principal component analysis and wavelet transform fusion. Several approaches to image fusion can be distinguished, depending on whether the images are fused in the spatial domain or they are transformed into another domain, and their transforms fused.

With the development of new imaging sensors arises the need for a meaningful combination of all employed imaging sources. The actual fusion process can take place at different levels of information representation; a generic categorization is to consider the levels, sorted in ascending order of abstraction, as signal, pixel, feature and symbolic level. This work focuses on the so-called pixel level fusion process, where a composite image has to be built from several input images. To date, the result of pixel level image fusion is considered primarily to be presented to the human observer, especially in image sequence fusion (where the input data consists of image sequences). A possible application is the fusion of forward looking infrared (FLIR) and low light visible (LLTV) images obtained by an airborne sensor platform to help a pilot navigate in poor weather conditions or darkness. In pixel-level image fusion, some generic requirements can be imposed on the fusion result:

  • The fusion process should preserve all relevant information of the input imagery in the composite image (pattern conservation).
  • The fusion scheme should not introduce any artifacts or inconsistencies which would distract the human observer or subsequent processing stages.
  • The fusion process should be shift and rotational invariant, i.e. the fusion result should not depend on the location or orientation of an object in the input imagery.

In the case of image sequence fusion, the additional problem of temporal stability and consistency of the fused image sequence arises. The human visual system is primarily sensitive to moving light stimuli, so moving artifacts or time-dependent contrast changes introduced by the fusion process are highly distracting to the human observer. So, in the case of image sequence fusion, two additional requirements apply:

  • Temporal stability: the fused image sequence should be temporally stable, i.e. gray level changes in the fused sequence must only be caused by gray level changes in the input sequences, not introduced by the fusion scheme itself.
  • Temporal consistency: gray level changes occurring in the input sequences must be present in the fused sequence without any delay or contrast change.
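
As a minimal illustration of pixel-level fusion, the sketch below applies two of the simplest fusion rules, pixel averaging and choose-maximum, to a pair of registered images; PCA or wavelet-domain fusion, mentioned above, is considerably more involved.

    # Minimal pixel-level fusion sketch with NumPy: averaging and choose-maximum
    # rules applied to two already-registered input images (random stand-ins here).
    import numpy as np

    def fuse_average(img_a, img_b):
        """Average the two registered input images pixel by pixel."""
        return (img_a.astype(np.float64) + img_b.astype(np.float64)) / 2.0

    def fuse_max(img_a, img_b):
        """Keep, at every pixel, the brighter (often the more salient) value."""
        return np.maximum(img_a, img_b)

    a = np.random.randint(0, 256, (4, 4))   # stand-ins for two imaging modalities
    b = np.random.randint(0, 256, (4, 4))
    print(fuse_average(a, b))
    print(fuse_max(a, b))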

Cellular Neural Networks

Added on: March 10th, 2012 by No Comments

Cellular Neural Networks (CNN) are a revolutionary concept and an experimentally proven computing paradigm for analog computers. A standard CNN architecture consists of an m*n rectangular array of cells c(i,j) with Cartesian coordinates. Considering the inputs and outputs of a cell as binary arguments, it can realize Boolean functions. Using this technology, analog computers mimic the anatomy and physiology of many sensory and processing organs with stored programmability. This has been called the “sensor revolution”, with cheap sensors and MEMS arrays in desired forms of artificial eyes, ears, noses, etc. Such a computer is capable of computing 3 trillion equivalent digital operations per second, a performance that can only be matched by supercomputers. CNN chips are mainly used in processing brain-like tasks due to their unique architecture, which is non-numeric and spatio-temporal in nature and requires no more than the accuracy of common neurons.
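
For readers who want to see the dynamics behind a CNN cell, the sketch below integrates the standard Chua-Yang state equation on a small grid with a crude Euler step. The 3x3 feedback and control templates are illustrative placeholders, not templates from any particular application.

    # Sketch of the standard CNN cell dynamics on a small grid; template values
    # and grid size are chosen only for illustration.
    import numpy as np
    from scipy.signal import convolve2d

    def f(x):                         # standard piecewise-linear output function
        return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

    def cnn_step(x, u, A, B, z, dt=0.1):
        y = f(x)
        dx = -x + convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + z
        return x + dt * dx

    A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], float)   # feedback template
    B = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float)   # control template
    u = np.random.uniform(-1, 1, (8, 8))                     # input image
    x = np.zeros((8, 8))                                     # initial cell states

    for _ in range(100):
        x = cnn_step(x, u, A, B, z=0.0)
    print(f(x))                        # binary-like output after settling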

WiMAX

Added on: March 9th, 2012 by No Comments

In recent years, Broadband technology has rapidly become an established, global commodity required by a high percentage of the population. The demand has risen rapidly, with a worldwide installed base of 57 million lines in 2002 rising to an estimated 80 million lines by the end of 2003. This healthy growth curve is expected to continue steadily over the next few years and reach the 200 million mark by 2006. DSL operators, who initially focused their deployments in densely-populated urban and metropolitan areas, are now challenged to provide broadband services in suburban and rural areas where new markets are quickly taking root. Governments are prioritizing broadband as a key political objective for all citizens to overcome the “broadband gap” also known as “digital divide”.

Wireless DSL (WDSL) offers an effective, complementary solution to wireline DSL, allowing DSL operators to provide broadband service to additional areas and populations that would otherwise find themselves outside the broadband loop. Government regulatory bodies are realizing the inherent worth in wireless technologies as a means for solving digital-divide challenges in the last mile and have accordingly initiated a deregulation process in recent years for both licensed and unlicensed bands to support this application. Recent technological advancements and the formation of a global standard and interoperability forum – WiMAX, set the stage for WDSL to take a significant role in the broadband market. Revenues from services delivered via Broadband Wireless Access have already reached $323 million and are expected to jump to $1.75 billion.

Biometric Systems

Added on: March 9th, 2012 by No Comments

A biometric is defined as a unique, measurable biological characteristic or trait used for automatically recognizing or verifying the identity of a human being. Statistically analyzing these biological characteristics has become known as the science of biometrics. These days, biometric technologies are typically used to analyze human characteristics for security purposes. Five of the most common physical biometric patterns analyzed for security purposes are the fingerprint, hand, eye, face, and voice. This paper examines the use of biometric characteristics as a means of identification: we give a brief overview of the field of biometrics and summarize some of its advantages, disadvantages, strengths, limitations, and related privacy concerns. We also look at how this process has been refined over time and how it currently works.

DNA Computing

Added on: March 9th, 2012 by No Comments

Molecular biologists are beginning to unravel the information-processing tools, such as enzymes, that evolution has spent billions of years refining. These tools are now being applied to large numbers of DNA molecules, using them as biological computer processors.

Dr. Leonard Adleman, a well-known scientist, found a way to exploit the speed and efficiency of biological reactions to solve the “Hamiltonian path problem”, a close relative of the “traveling salesman problem”.

Based on Dr. Adleman’s experiment, we will explain DNA computing, its algorithms, how to manage DNA based computing and the advantages and disadvantages of DNA computing.
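
To make the underlying problem concrete, the sketch below solves a toy Hamiltonian path instance by conventional brute force, the same search that Adleman's experiment carried out massively in parallel with DNA strands. The graph, start vertex and end vertex are invented for illustration.

    # Brute-force Hamiltonian path search on a toy directed graph.
    from itertools import permutations

    edges = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 3), (1, 3)}
    n, start, end = 5, 0, 4

    def hamiltonian_path():
        for middle in permutations(set(range(n)) - {start, end}):
            path = (start, *middle, end)
            if all((a, b) in edges for a, b in zip(path, path[1:])):
                return path        # a path visiting every vertex exactly once
        return None

    print(hamiltonian_path())       # -> (0, 1, 2, 3, 4)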

DNA Computing in Security

Added on: March 9th, 2012 by No Comments

As modern encryption algorithms are broken, the world of information security looks in new directions to protect the data it transmits. The concept of using DNA computing in the fields of cryptography and steganography has been identified as a possible technology that may bring forward a new hope for unbreakable algorithms. Is the fledgling field of DNA computing the next cornerstone in the world of information security or is our time better spent following other paths for our data encryption algorithms of the future?

This paper will outline some of the basics of DNA and DNA computing and its use in the areas of cryptography, steganography and authentication.

Research has been performed in both cryptographic and steganographic situations with respect to DNA computing. The constraints of its high-tech lab requirements and computational limitations, combined with its labour-intensive means of extracting results, show that the field of DNA computing is far from any kind of efficient use in today’s security world. DNA authentication, on the other hand, has shown great promise, with real-world examples already surfacing in the marketplace today.

Symbian OS

Added on: March 8th, 2012 by No Comments

Symbian OS is the operating system licensed by the world’s leading mobile phone manufacturers. Symbian OS is designed for the specific requirements of open, data-enabled 2G, 2.5G and 3G mobile phones. Key features of Symbian, and how Symbian supports modern features of mobile phones, are discussed briefly.

The Symbian platform was created by merging and integrating software assets contributed by Nokia, NTT DoCoMo, Sony Ericsson and Symbian Ltd., including Symbian OS assets at its core, the S60 platform, and parts of the UIQ and MOAP(S) user interfaces.

Symbian is a mobile operating system (OS) and computing platform designed for smartphones and currently maintained by Accenture. The Symbian platform is the successor to Symbian OS and Nokia Series 60; unlike Symbian OS, which needed an additional user interface system, Symbian includes a user interface component based on S60 5th Edition. The latest version, Symbian^3, was officially released in Q4 2010, first used in the Nokia N8. In May 2011 an update, Symbian Anna, was officially announced, followed by Nokia Belle (previously Symbian Belle) in August 2011.
Symbian OS was originally developed by Symbian Ltd. It is a descendant of Psion’s EPOC and runs exclusively on ARM processors, although an unreleased x86 port existed.

Grid Computing

Added on: March 8th, 2012 by No Comments

The Grid has the potential to fundamentally change the way science and engineering are done. The aggregate power of computing resources connected by networks—the Grid—exceeds that of any single supercomputer by many orders of magnitude. At the same time, our ability to carry out computations of the scale and level of detail required, for example, to study the Universe or simulate a rocket engine, is severely constrained by available computing power. Hence, such applications should be one of the main driving forces behind the development of Grid computing.
Grid computing is emerging as a new environment for solving difficult problems. Linear and nonlinear optimization problems can be computationally expensive. Resource access and management is one of the most important factors in grid computing. It requires a mechanism capable of making decisions automatically to support the collaboration and scheduling of computing tasks.

Grid computing is an active research area which promises to provide a flexible infrastructure for complex, dynamic and distributed resource sharing and sophisticated problem solving environments. The Grid is not only a low level infrastructure for supporting computation, but can also facilitate and enable information and knowledge sharing at the higher semantic levels, to support knowledge integration and dissemination.

Light Tree

Added on: March 8th, 2012 by No Comments

The concept of a light-tree is introduced in a wavelength-routed optical network. A light-tree is a point-to-multipoint generalization of a lightpath. A lightpath is a point-to-point all-optical wavelength channel connecting a transmitter at a source node to a receiver at a destination node. Lightpath communication can significantly reduce the number of hops (or lightpaths) a packet has to traverse, and this reduction can, in turn, significantly improve the network’s throughput. We extend the lightpath concept by incorporating an optical multicasting capability at the routing nodes in order to increase the logical connectivity of the network and further decrease its hop distance. We refer to such a point-to-multipoint extension as a light-tree. Light-trees can not only provide improved performance for unicast traffic, but also naturally better support multicast and broadcast traffic. In this study, we concentrate on the application and advantages of light-trees for unicast and broadcast traffic. We formulate the light-tree-based virtual topology design problem as an optimization problem with one of two possible objective functions: for a given traffic matrix,

(i) Minimize the network-wide average packet hop distance, or,
(ii) Minimize the total number of transceivers in the network. We demonstrate that an optimum light-tree-based virtual topology has clear advantages over an optimum lightpath-based virtual topology with respect to the above two objectives.
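
As a small illustration of objective (i), the sketch below computes the traffic-weighted average packet hop distance over a given virtual topology, the quantity such an optimization would minimize. The edge list and traffic matrix are invented, and NetworkX is assumed to be an acceptable helper for the shortest-path step.

    # Traffic-weighted average hop distance over a (virtual) topology.
    import networkx as nx

    def average_hop_distance(virtual_edges, traffic):
        g = nx.DiGraph(virtual_edges)
        total_pkts = sum(traffic.values())
        weighted = sum(t * nx.shortest_path_length(g, s, d)
                       for (s, d), t in traffic.items())
        return weighted / total_pkts

    edges = [(0, 1), (1, 2), (0, 2), (2, 0)]            # lightpaths / tree branches
    traffic = {(0, 1): 5, (0, 2): 3, (1, 2): 2, (1, 0): 1}
    print(average_hop_distance(edges, traffic))          # ~1.09 hops per packet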

Utility Fog (Nanofog)

Added on: March 7th, 2012 by No Comments

Nanotechnology is based on the concept of tiny, self-replicating robots. The Utility Fog is a very simple extension of this idea. Utility Fog is a highly advanced nanotechnology which the Technocratic Union has developed as the ultimate multi-purpose tool. It is a user-friendly, completely programmable collection of nanomachines that can form a vast range of machinery, from office pins to spaceships. It can simulate any material, from gas to liquid to solid, and in sufficient quantities it can even be used to implement the ultimate in virtual reality.

With the right programming, the robots can exert any force in any direction on the surface of any object. They can support the object so that it apparently floats in air. They can support a person applying the same pressure that a chair would. A programme running in Utility Fog can thus simulate the physical existence of any object.

Utility Fog should be capable of simulating most everyday materials, dynamically changing its form and forming a substrate for an integrated virtual reality. This paper will examine the basic concept, and explore some of the applications of this material.

Cell Phone Virus and Security

Added on: March 7th, 2012 by No Comments

Rapid advances in low-power computing, communications, and storage technologies continue to broaden the horizons of mobile devices, such as cell phones and personal digital assistants (PDAs). As the use of these devices extends into applications that require them to capture, store, access, or communicate sensitive data (e.g., mobile e-commerce, financial transactions, acquisition and playback of copyrighted content, etc.), security becomes an immediate concern. Left unaddressed, security concerns threaten to impede the deployment of new applications and value-added services, which is an important engine of growth for the wireless, mobile appliance and semiconductor industries. According to a survey of mobile appliance users, 52% cited security concerns as the biggest impediment to their adoption of mobile commerce.

A cell-phone virus is basically the same thing as a computer virus — an unwanted executable file that “infects” a device and then copies itself to other devices. But whereas a computer virus or worm spreads through e-mail attachments and Internet downloads, a cell-phone virus or worm spreads via Internet downloads, MMS (multimedia messaging service) attachments and Bluetooth transfers. The most common type of cell-phone infection right now occurs when a cell phone downloads an infected file from a PC or the Internet, but phone-to-phone viruses are on the rise.
Current phone-to-phone viruses almost exclusively infect phones running the Symbian operating system. The large number of proprietary operating systems in the cell-phone world is one of the obstacles to mass infection. Cell-phone-virus writers have no Windows-level marketshare to target, so any virus will only affect a small percentage of phones.

Infected files usually show up disguised as applications like games, security patches, add-on functionalities and free stuff. Infected text messages sometimes steal the subject line from a message you’ve received from a friend, which of course increases the likelihood of your opening it — but opening the message isn’t enough to get infected. You have to choose to open the message attachment and agree to install the program, which is another obstacle to mass infection: to date, no reported phone-to-phone virus auto-installs. The installation obstacles and the methods of spreading limit the amount of damage the current generation of cell-phone viruses can do.

Standard operating systems and Bluetooth technology will be a trend for future cell phone features. These will enable cell-phone viruses to spread either through SMS or by sending Bluetooth requests when cell phones are physically close enough. The difference in spreading methods gives these two types of viruses different epidemiological characteristics. The spread of SMS viruses is mainly based on people’s social connections, whereas the spreading of Bluetooth viruses is affected by people’s mobility patterns and population distribution. Using cell-phone data recording calls, SMS and the locations of more than 6 million users, we study the spread of SMS and Bluetooth viruses and characterize how the social network and the mobility of mobile phone users affect such spreading processes.

Smart Note Taker

Added on: March 4th, 2012 by No Comments

The Smart NoteTaker is a helpful product that meets the needs of people in today’s technological and fast-paced life. This product can be used in many ways. The Smart NoteTaker lets people who are busy with something else take notes quickly and easily. With the help of the Smart NoteTaker, people will be able to write notes in the air while being busy with their work. The written note will be stored on the memory chip of the pen and can be read in a digital medium after the job is done. This will save time and facilitate life.

The Smart NoteTaker is also helpful for blind people, who can think and write freely. Another place where the product can play an important role is when two people talk on the phone. The subscribers are apart from each other while they talk, and they may want to use figures or text to understand each other better. It is also especially useful for instructors in presentations. The instructors may not want to present the lecture in front of the board. The drawn figure can be processed and sent directly to the server computer in the room. The server computer can then broadcast the drawn shape through the network to all of the computers present in the room. In this way, lectures are intended to be more efficient and fun. This product will be simple but powerful. The product will be able to sense 3D shapes and motions that the user tries to draw. The sensed information will be processed, transferred to the memory chip and then shown on the display device. The drawn shape can then be broadcast to the network or sent to a mobile device.

There will be an additional feature of the product which will show the notes that were taken before in an application program on the computer. This application program can be a word document or an image file. The sensed figures that were drawn in the air will be recognized and, with the help of the software program we will write, the desired character will be printed in the word document. If the application program is a paint-related program, then the most similar shape will be chosen by the program and printed on the screen.

Since a Java applet is suitable for both drawings and strings, all these applications can be put together in a single Java program. The Java code we develop will also be installed on the pen so that the processor inside the pen can type and draw the desired shape or text on the display panel.

Speech Recognition

Added on: March 4th, 2012 by No Comments

Language is human beings’ most important means of communication, and speech is its primary medium. Speech provides an international forum for communication among researchers in the disciplines that contribute to our understanding of the production, perception, processing, learning and use of speech. Spoken interaction, both between human interlocutors and between humans and machines, is inescapably embedded in the laws and conditions of communication, which comprise the encoding and decoding of meaning as well as the mere transmission of messages over an acoustical channel. Here we deal with this interaction between man and machine through synthesis and recognition applications.
The paper dwells on speech technology and the conversion of speech into analog and digital waveforms that can be understood by machines.

Speech recognition, or speech-to-text, involves capturing and digitizing the sound waves, converting them to basic language units or phonemes, constructing words from phonemes, and contextually analyzing the words to ensure correct spelling for words that sound alike. Speech Recognition is the ability of a computer to recognize general, naturally flowing utterances from a wide variety of users. It recognizes the caller’s answers to move along the flow of the call.
We have emphasized the modeling of speech units and grammar on the basis of the Hidden Markov Model. Speech Recognition allows you to provide input to an application with your voice. The applications and limitations of this subject have enlightened us about the impact of speech processing in our modern technical field.
While there is still much room for improvement, current speech recognition systems have remarkable performance. We are only human, but as we develop this technology and make remarkable changes we attain certain achievements. Rather than asking what is still deficient, we ask instead what should be done to make it efficient.
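
A Hidden Markov Model recognizer ultimately asks which state sequence best explains the observed acoustic symbols, a question answered by the Viterbi algorithm. The toy two-state model below uses invented probabilities purely to illustrate that decoding step.

    # Toy Viterbi decoding over a two-state HMM with made-up probabilities.
    import numpy as np

    states = ["s1", "s2"]
    start = np.array([0.6, 0.4])
    trans = np.array([[0.7, 0.3],
                      [0.4, 0.6]])
    emit = np.array([[0.5, 0.4, 0.1],      # P(observation | state)
                     [0.1, 0.3, 0.6]])
    obs = [0, 1, 2]                        # indices of observed acoustic symbols

    def viterbi(obs, start, trans, emit):
        v = start * emit[:, obs[0]]
        back = []
        for o in obs[1:]:
            scores = v[:, None] * trans * emit[:, o]
            back.append(scores.argmax(axis=0))   # best predecessor for each state
            v = scores.max(axis=0)
        path = [int(v.argmax())]
        for b in reversed(back):                 # backtrack the most likely path
            path.append(int(b[path[-1]]))
        return [states[i] for i in reversed(path)]

    print(viterbi(obs, start, trans, emit))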

Intrusion Detection Systems

Added on: March 4th, 2012 by No Comments

An intrusion is an active sequence of related events that deliberately try to cause harm, such as rendering a system unusable, accessing unauthorized information or manipulating such information. To record information about both successful and unsuccessful attempts, security professionals place devices that examine the network traffic, called sensors. These sensors are placed both in front of the firewall (the unprotected area) and behind the firewall (the protected area), and attacks are evaluated by comparing the information recorded by the two.

An Intrusion Detection System (IDS) can be defined as the tools, methods and resources that help identify, assess and report unauthorized activity. Intrusion detection is typically one part of an overall protection system that is installed around a system or device. IDSs work at the network layer of the OSI model, and sensors are placed at choke points on the network. They analyze packets to find specific patterns in the network traffic; if they find such a pattern, an alert is logged and a response can be based on the data recorded.
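
A toy signature-based sensor can be sketched in a few lines: scan each packet payload for known byte patterns and log an alert on a match. The signatures below are invented examples; production engines (Snort-style rule sets, for instance) are far richer.

    # Toy signature matching over packet payloads; signatures are illustrative only.
    SIGNATURES = {
        b"/etc/passwd": "attempted file disclosure",
        b"' OR '1'='1": "SQL injection probe",
    }

    def inspect(payload, alerts):
        for pattern, name in SIGNATURES.items():
            if pattern in payload:
                alerts.append({"signature": name, "payload": payload[:40]})

    alerts = []
    inspect(b"GET /../../etc/passwd HTTP/1.1", alerts)
    inspect(b"GET /index.html HTTP/1.1", alerts)
    print(alerts)    # one alert logged for the first packet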

Neuro Chips

Added on: March 3rd, 2012 by No Comments

Until recently, neurobiologists have used computers for simulation, data collection, and data analysis, but not to interact directly with nerve tissue in live, behaving animals. Although digital computers and nerve tissue both use voltage waveforms to transmit and process information, engineers and neurobiologists have yet to cohesively link the electronic signaling of digital computers with the electronic signaling of nerve tissue in freely behaving animals.

Recent advances in microelectromechanical systems (MEMS), CMOS electronics, and embedded computer systems will finally let us link computer circuitry to neural cells in live animals and, in particular, to reidentifiable cells with specific, known neural functions. The key components of such a brain-computer system include neural probes, analog electronics, and a miniature microcomputer. Researchers developing neural probes such as sub-micron MEMS probes, microclamps, microprobe arrays, and similar structures can now penetrate and make electrical contact with nerve cells without causing significant or long-term damage to probes or cells.

Researchers developing analog electronics such as low-power amplifiers and analog-to-digital converters can now integrate these devices with microcontrollers on a single low-power CMOS die. Further, researchers developing embedded computer systems can now incorporate all the core circuitry of a modern computer on a single silicon chip that can run on minuscule power from a tiny watch battery. In short, engineers have all the pieces they need to build truly autonomous implantable computer systems.

Until now, high signal-to-noise recording as well as digital processing of real-time neuronal signals has been possible only in constrained laboratory experiments. By combining MEMS probes with analog electronics and modern CMOS computing into self-contained, implantable Microsystems, implantable computers will free neuroscientists from the lab bench.

Wavelet Video Processing Technology

Added on: March 2nd, 2012 by 1 Comment

The biggest obstacle to the multimedia revolution is digital obesity. This is the bloat that occurs when pictures, sound and video are converted from their natural analog form into computer language for manipulation or transmission. In the present explosion of high quality data, the need to compress it with less distortion is the need of the hour. Compression lowers the cost of storage and transmission by packing data into a smaller space.

One of the hottest areas of advanced compression is wavelet compression. Wavelet Video Processing Technology offers some alluring features, including high compression ratios and eye-pleasing enlargements.
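
The idea behind wavelet compression is that a wavelet transform concentrates most of a signal's energy into a few coefficients, so the small ones can be dropped. The sketch below shows a single-level 1-D Haar transform with a crude threshold; real video codecs apply 2-D multi-level transforms and proper quantization.

    # One-level 1-D Haar transform with a simple threshold as a stand-in for
    # quantization; signal and threshold are illustrative.
    import numpy as np

    def haar_forward(signal):
        pairs = signal.reshape(-1, 2)
        avg = pairs.mean(axis=1)                    # coarse approximation
        diff = (pairs[:, 0] - pairs[:, 1]) / 2.0    # detail coefficients
        return avg, diff

    def haar_inverse(avg, diff):
        return np.column_stack((avg + diff, avg - diff)).ravel()

    x = np.array([9.0, 7.0, 3.0, 5.0])
    avg, diff = haar_forward(x)
    diff[np.abs(diff) < 1.5] = 0       # crude "compression": drop tiny details
    print(haar_inverse(avg, diff))     # close to the original signal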

Phishing

Added on: February 28th, 2012 by 1 Comment

In the field of computer security, phishing is the criminally fraudulent process of attempting to acquire sensitive information such as usernames, passwords and credit card details by masquerading as a trustworthy entity in an electronic communication. Phishing is a fraudulent e-mail that attempts to get you to divulge personal data that can then be used for illegitimate purposes.

There are many variations on this scheme. It is possible to phish for other information in addition to usernames and passwords, such as credit card numbers, bank account numbers, social security numbers and mothers’ maiden names. Phishing presents direct risks through the use of stolen credentials and indirect risks to institutions that conduct business online through erosion of customer confidence. The damage caused by phishing ranges from denial of access to e-mail to substantial financial loss.

This report is also concerned with anti-phishing techniques. There are several different techniques to combat phishing, including legislation and technology created specifically to protect against phishing. No single technology will completely stop phishing. However, a combination of good organization and practice, proper application of current technologies and improvements in security technology has the potential to drastically reduce the prevalence of phishing and the losses suffered from it. Anti-phishing software and computer programs are designed to prevent the occurrence of phishing and trespassing on confidential information. Anti-phishing software is designed to track websites and monitor activity; any suspicious behavior can be automatically reported and even reviewed as a report after a period of time.
The report also covers how to detect phishing attacks, how to prevent and avoid being scammed, how to react when you suspect or uncover a phishing attack, and what you can do to help stop phishers.
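
Anti-phishing tools typically start with cheap heuristics on the URL itself before any page content is inspected. The checks and thresholds below are illustrative only, not the rule set of any real product.

    # Illustrative URL heuristics of the kind anti-phishing toolbars apply.
    from urllib.parse import urlparse
    import re

    def suspicious(url):
        reasons = []
        host = urlparse(url).hostname or ""
        if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
            reasons.append("raw IP address instead of a domain name")
        if host.count(".") > 3:
            reasons.append("unusually many subdomains")
        if "@" in url:
            reasons.append("'@' can hide the real destination")
        if any(w in url.lower() for w in ("login", "verify", "update")) and not url.startswith("https"):
            reasons.append("credential-related keywords over plain HTTP")
        return reasons

    print(suspicious("http://192.168.10.5/secure.bank.example.com/login"))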

Dense Wavelength Division Multiplexing

Added on: February 28th, 2012 by No Comments

There has always been a technological drive to fulfill the constant need to extend the capacity of communication channels, and DWDM (Dense Wavelength Division Multiplexing) has dramatically brought about an explosive enlargement of the capacity of fiber networks, solving the problem of increasing traffic demand most economically.

DWDM is a technique that makes possible the transmission of multiple discrete wavelengths, each carrying data rates as high as the fiber plant allows, over a single fiber, unidirectionally or bidirectionally.

It is an advanced type of WDM in which the optical channels are more closely spaced than WDM.

3D Password

Added on: February 28th, 2012 by Afsal Meerankutty 3 Comments

Normally, the authentication scheme a user undergoes is either very lenient or very strict. Throughout the years, authentication has been a very interesting field. With all the means of technology developing, it can be very easy for ‘others’ to fabricate or steal an identity or to hack someone’s password. Therefore many algorithms have come up, each with an interesting approach toward calculation of a secret key. The algorithms are based on picking a random number in the range of 10^6, and therefore the possibility of the same number coming up twice is rare.

Users nowadays are provided with major password stereotypes such as textual passwords, biometric scanning, and tokens or cards (such as an ATM card). Mostly, textual passwords follow an encryption algorithm as mentioned above. Biometric scanning is your “natural” signature, and cards or tokens prove your validity. But some people hate having to carry around their cards, and some refuse to undergo strong IR exposure to their retinas (biometric scanning). Textual passwords, nowadays, are usually kept very simple, say a word from the dictionary or a pet’s name, a girlfriend’s name, etc. Years back, Klein performed such tests and could crack 10-15 passwords per day. Now, with the change in technology, fast processors and the many tools available on the Internet, this has become child’s play.

Therefore we present our idea, the 3D password, which is a more customizable and very interesting way of authentication. The password is now based on the workings of human memory. Generally, simple passwords are set so as to quickly recall them. In our scheme, human memory has to undergo recognition, recall, biometrics or token-based authentication. Once the scheme is implemented and you log in to a secure site, the 3D password GUI opens up. There is an additional textual password which the user can simply enter. Once the user passes this first authentication, a 3D virtual room opens on the screen; in our case, let us say a virtual garage. In a day-to-day garage one will find all sorts of tools and equipment, each having unique properties. The user will then interact with these properties accordingly. Each object in the 3D space can be moved around in an (x, y, z) plane; that is the moving attribute of each object, and this property is common to all objects in the space. Suppose a user logs in and enters the garage. He sees and picks a screwdriver (initial position in xyz coordinates (5, 5, 5)) and moves it 5 places to his right (in the XY plane, i.e. to (10, 5, 5)). That can be identified as an authentication step. Only the true user understands and recognizes the object which he has to choose among many. This is the recall and recognition part of human memory coming into play. Interestingly, a password can also be set as approaching a radio and setting its frequency to a number only the user knows. Security can be enhanced further by including cards and biometric scanners as input, and there can be several levels of authentication a user can undergo.
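
One simple way to picture how such an interaction sequence could become a verifiable secret is to serialize the (object, action, final position) events and hash them, as in the hypothetical sketch below; the object names, actions and coordinates are taken loosely from the garage example above and are not part of any published 3D password scheme.

    # Hypothetical sketch: hash the sequence of interactions performed in the scene.
    import hashlib

    def encode_actions(actions):
        """actions: list of (object, action, (x, y, z)) tuples performed in the scene."""
        flat = "|".join(f"{obj}:{act}:{x},{y},{z}" for obj, act, (x, y, z) in actions)
        return hashlib.sha256(flat.encode()).hexdigest()

    enrolled = encode_actions([
        ("screwdriver", "move", (10, 5, 5)),
        ("radio", "set_frequency_101.3", (2, 1, 0)),
    ])
    attempt = encode_actions([
        ("screwdriver", "move", (10, 5, 5)),
        ("radio", "set_frequency_101.3", (2, 1, 0)),
    ])
    print(attempt == enrolled)   # True only if the same interactions are repeated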

Hyper Transport Technology

Added on: February 28th, 2012 by No Comments

Hyper Transport technology is a very fast, low latency, point-to-point link used for inter-connecting integrated circuits on board. Hyper Transport, previously codenamed as Lightning Data Transport (LDT), provides the bandwidth and flexibility critical for today’s networking and computing platforms while retaining the fundamental programming model of PCI. Hyper Transport was invented by AMD and perfected with the help of several partners throughout the industry.

Hyper Transport was designed to support both CPU-to-CPU communications and CPU-to-I/O transfers; thus, it features very low latency. It provides up to 22.4 Gigabytes/second of aggregate CPU-to-I/O or CPU-to-CPU bandwidth in a highly efficient chip-to-chip technology that replaces existing complex multi-level buses. Using enhanced 1.2 volt LVDS signaling reduces signal noise, using non-multiplexed lines cuts down on signal activity, and using dual-data-rate clocks lowers clock rates while increasing data throughput. It employs a packet-based data protocol to eliminate many sideband (control and command) signals and supports asymmetric, variable width data paths.
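
The 22.4 GB/s aggregate figure can be sanity-checked with simple arithmetic, assuming a 32-bit link clocked at 1.4 GHz with dual-data-rate signalling and counting both directions of the link:

    # Back-of-the-envelope check of the aggregate bandwidth figure quoted above.
    def ht_aggregate_gbytes(width_bits=32, clock_ghz=1.4, ddr=2, directions=2):
        transfers_per_sec = clock_ghz * 1e9 * ddr     # dual-data-rate: 2 transfers/clock
        bytes_per_transfer = width_bits / 8
        return transfers_per_sec * bytes_per_transfer * directions / 1e9

    print(ht_aggregate_gbytes())   # ~22.4 GB/s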

New specifications are backward compatible with previous generations of specification, extending the investment made in one generation of Hyper Transport-enabled device to future generations. Hyper Transport devices are PCI software compatible, thus they require little or no software overhead. The technology targets networking, telecommunications, computers and embedded systems and any application where high speed, low latency and scalability are necessary.

Darknet

Added on: February 28th, 2012 by No Comments

This paper outlines a migration path towards universal broadband connectivity, motivated by the design of a wireless store-and-forward communications network.

We argue that the cost of real-time, circuit-switched communications is sufficiently high that it may not be the appropriate starting point for rural connectivity. Based on market data for information and communication technology (ICT) services in rural India, we propose a combination of wireless technology with an asynchronous mode of communications to offer a means of introducing ICTs with:

  • affordability and practicality for end users
  • a sustainable cost structure for operators and investors
  • a smooth migration path to universal broadband connectivity.

A summary of results and data are given for an operational pilot test of this wireless network in Karnataka, India, beginning in March 2003.
We also briefly discuss the economics and policy considerations for deploying this type of network in the context of rural connectivity.

4G Wireless Technology

Added on: February 27th, 2012 by 1 Comment

Pick up any newspaper today and it is a safe bet that you will find an article somewhere relating to mobile communications. If it is not in the technology section it will almost certainly be in the business section and relate to the increasing share prices of operators or equipment manufacturers, or acquisitions and take-overs thereof. Such is the pervasiveness of mobile communications that it is affecting virtually everyone’s life and has become a major political topic and a significant contributor to national gross domestic product (GDP).

The major driver of change in the mobile area in the last ten years has been the massive enabling implications of digital technology, both in digital signal processing and in service provision. The equivalent driver now, and in the next five years, will be the all-pervasiveness of software in both networks and terminals. The digital revolution is well underway and we stand at the doorway to the software revolution. Accompanying these changes are societal developments involving extensions in the use of mobiles. Starting out from speech-dominated services, we are now experiencing massive growth in applications involving SMS (Short Message Service), together with the start of Internet applications using WAP (Wireless Application Protocol) and i-mode. The mobile phone has not only followed the watch, the calculator and the organiser as an essential personal accessory but has subsumed all of them. With the new Internet extensions it will also lead to a convergence of the PC, hi-fi and television and provide mobility to facilities previously only available on one network.

Optical Computers

Added on: February 26th, 2012 by No Comments

Computers have enhanced human life to a great extent. The goal of improving computer speed has resulted in the development of Very Large Scale Integration (VLSI) technology, with smaller device dimensions and greater complexity.

VLSI technology has revolutionized the electronics industry and additionally, our daily lives demand solutions to increasingly sophisticated and complex problems, which requires more speed and better performance of computers.

For these reasons, it is unfortunate that VLSI technology is approaching its fundamental limits in the sub-micron miniaturization process. It is now possible to fit up to 300 million transistors on a single silicon chip. As per Moore’s law, it is also estimated that the number of transistor switches that can be put onto a chip doubles every 18 months. Further miniaturization of lithography introduces several problems such as dielectric breakdown, hot carriers, and short channel effects. All of these factors combine to seriously degrade device reliability. Even if developing technology succeeded in temporarily overcoming these physical problems, we will continue to face them as long as the demand for higher integration keeps increasing. Therefore, a dramatic solution to the problem is needed, and unless we gear our thoughts toward a totally different pathway, we will not be able to further improve our computer performance in the future.

Optical interconnections and optical integrated circuits will provide a way out of these limitations to the computational speed and complexity inherent in conventional electronics. Optical computers will use photons travelling in optical fibers or thin films, instead of electrons, to perform the appropriate functions. In the optical computer of the future, electronic circuits and wires will be replaced by a few optical fibers and films, making the systems more efficient, free of interference, more cost effective, lighter and more compact. Optical components would not need insulators like those required between electronic components because they do not suffer from crosstalk. Indeed, multiple frequencies (or colors) of light can travel through optical components without interfering with each other, allowing photonic devices to process multiple streams of data simultaneously.
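To make the last point concrete, here is a toy model of wavelength-division parallelism: several "colours" of light share one optical path, each carrying an independent bit stream, and the receiver separates them by wavelength. The channel names, wavelengths, and data are invented for the example; this is a conceptual sketch, not a simulation of real photonic hardware.

```python
# Toy model of wavelength-division parallelism: several wavelengths share one
# optical path, each carrying an independent bit stream, and the receiver
# selects a stream by wavelength. Conceptual sketch only.

from dataclasses import dataclass

@dataclass
class Channel:
    wavelength_nm: float  # nominal carrier wavelength (hypothetical values)
    bits: list            # payload carried on this wavelength

def multiplex(channels):
    """Combine all channels onto one shared fibre (modelled as a dict keyed by wavelength)."""
    return {ch.wavelength_nm: ch.bits for ch in channels}

def demultiplex(fibre, wavelength_nm):
    """Select one wavelength at the receiver; the other channels are untouched."""
    return fibre[wavelength_nm]

if __name__ == "__main__":
    streams = [
        Channel(1550.0, [1, 0, 1, 1]),
        Channel(1551.6, [0, 0, 1, 0]),
        Channel(1553.2, [1, 1, 0, 1]),
    ]
    fibre = multiplex(streams)
    # Each stream is recovered independently -- no crosstalk in this idealised model.
    for ch in streams:
        assert demultiplex(fibre, ch.wavelength_nm) == ch.bits
    print("all", len(streams), "wavelength channels recovered independently")
```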

Biochips

Added on: February 26th, 2012 by No Comments

Biochips were invented 9 years ago by gene scientist Stephen Fodor. In a flash of insight he saw that photolithography, the process used to etch semiconductor circuits into silicon, could also be used to assemble particular DNA molecules on a chip.

The human body is the chip makers' next big target. Medical researchers have long been working to integrate the human body with chips, and biochips could be implanted in humans within a short time, achieving that integration.

Money and research have already gone into this area of technology, and such implants are already being tested in animals: simple chips have been implanted in tens of thousands of animals, especially pets.

Rapid Prototyping

Added on: February 26th, 2012 by No Comments

The term rapid prototyping (RP) refers to a class of technologies that can automatically construct physical models from Computer-Aided Design (CAD) data. It is also called Desktop Manufacturing or Freeform Fabrication. These technologies enable us to make even complex prototypes that act as an excellent visual aid to communicate with co-workers and customers. These prototypes are also used for design testing.
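The paragraph above does not spell out the mechanism, but most RP processes build a part layer by layer from sliced CAD geometry. The sketch below slices a tiny made-up triangle mesh (as an STL file would describe it) into horizontal build layers; the mesh, layer height, and function names are hypothetical and chosen only to illustrate the idea.

```python
# Minimal sketch of the layer-by-layer idea behind most rapid prototyping
# processes: take triangles from a CAD model (e.g. an STL file) and work out
# which horizontal build layers each triangle contributes to. Conceptual
# illustration only, not a production slicer; mesh and layer height are made up.

LAYER_HEIGHT = 0.5  # mm, hypothetical build-layer thickness

def layers_for_triangle(triangle, layer_height=LAYER_HEIGHT):
    """Return the indices of the horizontal layers a triangle spans."""
    zs = [vertex[2] for vertex in triangle]
    lowest = int(min(zs) // layer_height)
    highest = int(max(zs) // layer_height)
    return range(lowest, highest + 1)

def slice_mesh(triangles, layer_height=LAYER_HEIGHT):
    """Group triangles by the layers they intersect (layer index -> triangles)."""
    layers = {}
    for tri in triangles:
        for idx in layers_for_triangle(tri, layer_height):
            layers.setdefault(idx, []).append(tri)
    return layers

if __name__ == "__main__":
    # A made-up mesh: two faces of a small pyramid, heights in mm.
    mesh = [
        ((0, 0, 0), (10, 0, 0), (5, 5, 8)),
        ((10, 0, 0), (10, 10, 0), (5, 5, 8)),
    ]
    for idx, tris in sorted(slice_mesh(mesh).items()):
        print(f"layer {idx:2d} (z = {idx * LAYER_HEIGHT:.1f} mm): {len(tris)} triangle(s)")
```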

Why Rapid Prototyping?

  • Objects can be formed with any geometric complexity or intricacy without the need for elaborate machine set-up or final assembly.
  • Freeform fabrication systems reduce the construction of complex objects to a manageable, straightforward, and relatively fast process.
  • These techniques are currently being advanced to the extent that they can be used for low-volume, economical production of parts.
  • They significantly cut costs as well as development times.

Crusoe Processor

Added on: February 26th, 2012 by No Comments

Mobile computing has been the buzzword for quite a long time. Mobile computing devices such as laptops and notebook PCs are becoming common nowadays. The heart of every PC, whether a desktop or a mobile PC, is the microprocessor. Several microprocessors are available in the market for desktop PCs from companies like Intel, AMD and Cyrix. The mobile computing market has never had a microprocessor specifically designed for it; the microprocessors used in mobile PCs are optimized versions of desktop PC microprocessors.

Mobile computing makes very different demands on processors than desktop computing. Desktop PC processors consume lots of power, and they get very hot. When you're on the go, a power-hungry processor exacts a price: run out of power before you've finished, or run through the airport with pounds of extra batteries. A hot processor also needs fans to cool it, making the resulting mobile computer bigger, clunkier and noisier. The market will still reject a newly designed microprocessor with low power consumption if the performance is poor, so any attempt in this area must strike a proper performance-power balance to ensure commercial success. A newly designed microprocessor must also be fully x86-compatible; that is, it should run x86 applications just like conventional x86 microprocessors, since most of the presently available software has been designed to work on the x86 platform.

Crusoe is a new microprocessor designed specifically for the mobile computing market, taking the above-mentioned constraints into account. A small Silicon Valley startup company called Transmeta Corp. developed this microprocessor.

The concept of Crusoe is well understood from a simple sketch of the processor architecture, called the 'amoeba'. In this concept, the x86 architecture is an ill-defined amoeba containing features like segmentation, ASCII arithmetic, variable-length instructions, etc. Thus Crusoe was conceptualized as a hybrid microprocessor: it has a software part and a hardware part, with the software layer surrounding the hardware unit. The role of the software is to act as an emulator that translates x86 binaries into native code at run time. Crusoe is a 128-bit microprocessor fabricated using a CMOS process. The chip's design is based on a technique called VLIW to ensure design simplicity and high performance. The other two key technologies used are Code Morphing Software and LongRun Power Management. The Crusoe hardware can be changed radically without affecting legacy x86 software: for the initial Transmeta products, models TM3120 and TM5400, the hardware designers opted for minimal space and power.
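To give a feel for the code-morphing idea described above, here is a toy translator that turns x86-style instructions into wide VLIW "molecules" (bundles of simple operations) and caches the result so that hot code is translated only once. The instruction names, bundle width, and encodings are invented for this sketch; Transmeta's actual Code Morphing Software is far more sophisticated.

```python
# Toy illustration of code morphing: a software layer translates x86-style
# instructions into VLIW "molecules" (fixed-width bundles of simple "atoms")
# and caches translations so frequently executed code is morphed only once.
# Instruction names and bundle format are invented for this sketch.

TRANSLATIONS = {
    # x86-style op          ->  list of native "atoms" (hypothetical names)
    ("mov", "eax", 5):      [("ld_imm", "r0", 5)],
    ("add", "eax", "ebx"):  [("alu_add", "r0", "r0", "r1")],
    ("inc", "ecx"):         [("alu_add", "r2", "r2", 1)],
}

MOLECULE_WIDTH = 4  # atoms issued together per VLIW molecule (hypothetical)

translation_cache = {}

def translate_block(x86_block):
    """Translate a tuple of x86-style instructions into VLIW molecules, with caching."""
    if x86_block in translation_cache:
        return translation_cache[x86_block]           # already morphed: reuse
    atoms = [atom for insn in x86_block for atom in TRANSLATIONS[insn]]
    molecules = [tuple(atoms[i:i + MOLECULE_WIDTH])    # pack atoms into fixed-width bundles
                 for i in range(0, len(atoms), MOLECULE_WIDTH)]
    translation_cache[x86_block] = molecules
    return molecules

if __name__ == "__main__":
    block = (("mov", "eax", 5), ("add", "eax", "ebx"), ("inc", "ecx"))
    print(translate_block(block))   # translated on first use
    print(translate_block(block))   # served from the translation cache
```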

Blue Gene

Added on: February 24th, 2012 by No Comments

Blue Gene/L (BG/L) is a 64K (65,536) node scientific and engineering supercomputer that IBM is developing with partial funding from the United States Department of Energy. This paper describes one of the primary BG/L interconnection networks, a three-dimensional torus. We describe a parallel performance simulator that was used extensively to help architect and design the torus network, and present sample simulator performance studies that contributed to design decisions. In addition to such studies, the simulator was also used during the logic verification phase of BG/L for performance verification, and its use there uncovered a bug in the VHDL implementation of one of the arbiters. Blue Gene/L is a scientific and engineering, message-passing supercomputer that IBM is developing with partial funding from the U.S. Department of Energy Lawrence Livermore National Laboratory. A 64K-node system is scheduled to be delivered to Livermore, while a 20K-node system will be installed at the IBM T.J. Watson Research Center for use in life sciences computing, primarily protein folding. A more complete overview of BG/L may be found in [1], but we briefly describe the primary features of the machine.
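To make the torus topology concrete, the short sketch below computes each node's six neighbours (one in each direction per dimension, with wrap-around links) and the minimal hop count between two nodes, taking the shorter way around each ring. The 8x8x8 size is an arbitrary choice for illustration, not BG/L's actual partition geometry.

```python
# Sketch of a 3-D torus interconnect: every node (x, y, z) has six neighbours
# (+/-1 in each dimension, with wrap-around), and the minimal hop count between
# two nodes uses the shorter way around each ring. Dimensions are illustrative.

DIMS = (8, 8, 8)  # nodes per dimension (hypothetical partition size)

def neighbours(node, dims=DIMS):
    """Return the six torus neighbours of a node, with wrap-around."""
    result = []
    for axis, size in enumerate(dims):
        for step in (-1, +1):
            coords = list(node)
            coords[axis] = (coords[axis] + step) % size  # wrap-around link
            result.append(tuple(coords))
    return result

def hop_distance(a, b, dims=DIMS):
    """Minimal number of hops between two nodes on the torus."""
    return sum(min(abs(ai - bi), size - abs(ai - bi))
               for ai, bi, size in zip(a, b, dims))

if __name__ == "__main__":
    print(neighbours((0, 0, 0)))               # includes (7, 0, 0) etc. via wrap-around
    print(hop_distance((0, 0, 0), (7, 4, 1)))  # 1 + 4 + 1 = 6 hops
```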

Image Authentication Techniques

Added on: February 24th, 2012 by No Comments

This paper explores the various techniques used to authenticate the visual data recorded by automatic video surveillance (VS) systems. Automatic video surveillance systems are used for continuous and effective monitoring and reliable control of remote and dangerous sites. Some practical issues must be taken into account in order to take full advantage of the potential of a VS system. The validity of the visual data acquired, processed and possibly stored by the VS system as proof in front of a court of law is one such issue.

But visual data can be modified using sophisticated processing tools without leaving any visible trace of the modification. As a result, digital image or video data have no value as legal proof on their own, since doubt would always exist that they had been intentionally tampered with to incriminate or exculpate the defendant. Moreover, video data can be created artificially by computerized techniques such as morphing. Therefore the true origin of the data must be indicated if they are to be used as legal proof. By data authentication we mean here a procedure capable of ensuring that data have not been tampered with and of indicating their true origin.
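One common building block for this kind of authentication is a keyed message authentication code computed over the image data at acquisition time and re-checked later to detect tampering. The sketch below uses Python's standard hmac and hashlib modules over raw frame bytes; the key, frame data, and function names are placeholders, and real surveillance systems would pair such a check with secure key storage, digital signatures or watermarking to also establish origin.

```python
# Minimal sketch of one building block for data authentication: a keyed MAC
# (HMAC-SHA256) computed over the image bytes when they are acquired, and
# re-checked later to detect tampering. Key handling and the frame source are
# placeholders; real systems add secure key storage, signatures or watermarking.

import hmac
import hashlib

SECRET_KEY = b"example-key-held-by-the-camera"   # placeholder key

def authenticate_frame(frame_bytes: bytes, key: bytes = SECRET_KEY) -> str:
    """Compute the authentication tag stored alongside the frame."""
    return hmac.new(key, frame_bytes, hashlib.sha256).hexdigest()

def verify_frame(frame_bytes: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Return True only if the frame has not been modified since the tag was made."""
    expected = hmac.new(key, frame_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    frame = b"\x00\x01\x02..."                  # stand-in for raw image data
    tag = authenticate_frame(frame)
    print(verify_frame(frame, tag))             # True: untouched
    print(verify_frame(frame + b"\xff", tag))   # False: tampered
```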
