

Engineering Topics Category

Touch Screen

Added on: March 12th, 2012

A touch screen is a computer display screen that is sensitive to human touch, allowing a user to interact with the computer by touching pictures or words on the screen. Touch screens are used with information kiosks (interactive computer terminals available for public use, such as those offering internet access or site-specific information), computer-based training devices, and systems designed to help individuals who have difficulty manipulating a mouse or keyboard. This technology can also serve as an alternative user interface for applications that normally require a mouse, such as a web browser. Some applications are designed specifically for touch screen technology, often having larger icons and links than typical PC applications. Monitors are also available with a built-in touch screen kit.

A touch screen kit includes a touch screen panel, a controller, and a software driver. The panel is a clear sheet attached externally to the monitor that plugs into a serial or Universal Serial Bus (USB) port, or into a bus card installed inside the computer. The touch screen panel registers touch events and passes these signals to the controller. The controller then processes the signals and sends the data to the processor. The software driver translates the touch events into mouse events. Drivers can be provided for both Windows and Macintosh operating systems. Internal touch screen kits are available but require professional installation, because they must be installed inside the monitor.
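The driver's translation step can be sketched as follows. This is an illustrative sketch only, not a real driver API; the event types and the `translate` function are invented names.

```python
# Hypothetical sketch of a touch screen driver's role: converting raw touch
# events from the controller into the mouse events the OS expects.
from dataclasses import dataclass

@dataclass
class TouchEvent:
    x: int          # panel coordinates reported by the controller
    y: int
    pressed: bool   # True on touch-down, False on lift-off

@dataclass
class MouseEvent:
    x: int
    y: int
    action: str     # "move", "button_down", or "button_up"

def translate(event: TouchEvent, was_pressed: bool) -> list[MouseEvent]:
    """Map one touch event to the equivalent mouse events."""
    events = [MouseEvent(event.x, event.y, "move")]   # cursor follows the finger
    if event.pressed and not was_pressed:
        events.append(MouseEvent(event.x, event.y, "button_down"))
    elif was_pressed and not event.pressed:
        events.append(MouseEvent(event.x, event.y, "button_up"))
    return events

# A touch-down at (120, 45) becomes a move followed by a button press:
out = translate(TouchEvent(120, 45, True), was_pressed=False)
print([e.action for e in out])   # ['move', 'button_down']
```

In a real kit this mapping runs inside the OS-specific driver, which is why the same panel and controller can ship with both Windows and Macintosh drivers.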

Virtual Retinal Display

Added on: March 12th, 2012

The Virtual Retinal Display (VRD) is a personal display device under development at the University of Washington’s Human Interface Technology Laboratory in Seattle, Washington USA. The VRD scans light directly onto the viewer’s retina. The viewer perceives a wide field of view image. Because the VRD scans light directly on the retina, the VRD is not a screen based technology.

The VRD was invented at the University of Washington in the Human Interface Technology Lab (HIT) in 1991. Development began in November 1993. The aim was to produce a full-color, wide field-of-view, high-resolution, high-brightness, low-cost virtual display. Microvision Inc. holds the exclusive license to commercialize the VRD technology. This technology has many potential applications, from head-mounted displays (HMDs) for military/aerospace use to medical applications.

The VRD projects a modulated beam of light (from an electronic source) directly onto the retina of the eye, producing a rasterized image. The viewer has the illusion of seeing the source image as if he/she were standing two feet away in front of a 14-inch monitor. In reality, the image is on the retina of the viewer's eye and not on a screen. The perceived image quality is excellent, with stereo view, full color, wide field of view, and no flicker.

Interplanetary Internet

Added on: March 12th, 2012

Increasingly, network applications must communicate with counterparts across disparate networking environments characterized by significantly different sets of physical and operational constraints; wide variations in transmission latency are particularly troublesome. The proposed Interplanetary Internet (IPN), which must encompass both terrestrial and interplanetary links, is an extreme case. An architecture based on a protocol that can operate successfully and reliably in multiple disparate environments would simplify the development and deployment of such applications. The Internet protocols are ill-suited for this purpose. They are, in general, poorly suited to operation on paths in which some of the links operate intermittently or over extremely long propagation delays. The principal problem is reliable transport, but the operation of the Internet's routing protocols would also raise troubling issues.

It is this analysis that leads us to propose an architecture based on Internet-independent middleware: use exactly those protocols at all layers that are best suited to operation within each environment, but insert a new overlay network protocol between the applications and the locally optimized stacks. This new protocol layer, called the bundle layer, ties together the region-specific lower layers so that application programs can communicate across multiple regions.
The DTN architecture implements store-and-forward message switching.

A DTN is a network of regional networks, where a regional network is a network that is adapted to a particular communication region, wherein communication characteristics are relatively homogeneous. Thus, DTNs support interoperability of regional networks by accommodating long delays between and within regional networks, and by translating between regional communication characteristics.
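The store-and-forward behaviour at the heart of the DTN architecture can be sketched in a few lines. This is a toy model, not the actual bundle protocol: node and field names are invented, and real DTN nodes add custody transfer, persistent storage, and routing.

```python
# Minimal sketch of store-and-forward message switching: a node stores
# bundles while the next intermittent link is down and forwards them only
# when a contact opens.
from collections import deque

class BundleNode:
    def __init__(self, name):
        self.name = name
        self.store = deque()   # bundles held until a link becomes available

    def receive(self, bundle):
        self.store.append(bundle)

    def forward(self, next_hop, link_up):
        """Forward stored bundles only while the intermittent link is up."""
        sent = 0
        while self.store and link_up:
            next_hop.receive(self.store.popleft())
            sent += 1
        return sent

earth, relay = BundleNode("earth"), BundleNode("mars-relay")
earth.receive({"dst": "rover", "data": "command"})
earth.forward(relay, link_up=False)   # contact closed: bundle stays stored
earth.forward(relay, link_up=True)    # contact open: bundle moves one hop
print(len(earth.store), len(relay.store))   # 0 1
```

The key contrast with end-to-end TCP is that the bundle survives at the intermediate node across an arbitrarily long link outage instead of requiring a live end-to-end path.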

Wibree Technology

Added on: March 11th, 2012

Now that wireless connections are established solutions in various sectors of consumer electronics, the question arises whether devices that draw long life from a small battery could benefit as well from a global standard for wireless low-energy technology. Makers of sensors for sports, health and fitness devices have dabbled in wireless, but not together, while manufacturers of products like watches have never even considered adding wireless functionality because no options were available. Several wireless technologies have tried to address the needs of the button-cell battery market, but most were proprietary and garnered little industry support, and none of them let smaller manufacturers plug in to a global standard that provides a viable link with devices like mobile phones and laptops.

As a result, companies that want to make their small devices wireless need to build and sell either a dedicated display unit or an adapter that connects to a computing platform such as a mobile phone, PC or iPod. Few successful products have followed this route to a mass market. A new flavor of Bluetooth technology may be just the answer, and a more efficient alternative to yet another wireless standard.

Self Managing Computing

Added on: March 11th, 2012

Self managing computing helps address the complexity issues by using technology to manage technology. The idea is not new: many of the major players in the industry have developed and delivered products based on this concept. Self managing computing is also known as autonomic computing.

Autonomic Computing is an initiative started by IBM in 2001. Its ultimate aim is to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth. In other words, autonomic computing refers to the self-managing characteristics of distributed computing resources, adapting to unpredictable changes while hiding intrinsic complexity from operators and users.

The term autonomic is derived from human biology. The autonomic nervous system monitors your heartbeat, checks your blood sugar level and keeps your body temperature close to 98.6°F, without any conscious effort on your part. In much the same way, self managing computing components anticipate computer system needs and resolve problems with minimal human intervention.

Self managing computing systems have the ability to manage themselves and dynamically adapt to change in accordance with business policies and objectives. Self-managing systems can perform management activities based on situations they observe or sense in the IT environment. Rather than IT professionals initiating management activities, the system observes something about itself and acts accordingly. This allows the IT professional to focus on high-value tasks while the technology manages the more mundane operations. Self managing computing can result in a significant improvement in system management efficiency, when the disparate technologies that manage the environment work together to deliver performance results system wide.
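The observe-and-act cycle described above is often organized as a monitor-analyze-plan-execute loop. The sketch below illustrates that loop shape only; the metric, threshold, and "add a server" action are invented for illustration and stand in for real sensors and effectors.

```python
# Toy autonomic control loop: the system observes itself, compares the
# observation against a business policy, and acts without operator input.
def monitor(system):
    return {"cpu": system["cpu"]}               # observe the managed resource

def analyze(metrics, policy):
    return metrics["cpu"] > policy["cpu_max"]   # does the state violate policy?

def plan(violation):
    return ["add_server"] if violation else []  # choose corrective actions

def execute(system, actions):
    for action in actions:
        if action == "add_server":
            system["servers"] += 1
            system["cpu"] //= 2                 # simplistic load-spreading model

system = {"cpu": 90, "servers": 2}
policy = {"cpu_max": 80}
execute(system, plan(analyze(monitor(system), policy)))
print(system)   # {'cpu': 45, 'servers': 3}
```

The IT professional writes the policy (`cpu_max`); the loop, not the professional, initiates the management activity.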

However, complete autonomic systems do not yet exist. This is not a proprietary solution. It’s a radical change in the way businesses, academia, and even the government design, develop, manage and maintain computer systems. Self managing computing calls for a whole new area of study and a whole new way of conducting business.

Self managing computing is the self-management of e-business infrastructure, balancing what is managed by the IT professional and what is managed by the system. It is the evolution of e-business.

Virtual Keyboard

Added on: March 11th, 2012

Virtual Keyboard is just another example of today's computer trend of 'smaller and faster'. Computing is now not limited to desktops and laptops; it has found its way into mobile devices like palmtops and even cell phones. But what has not changed for the last 50-odd years is the input device, the good old QWERTY keyboard. The virtual keyboard technology is the latest development.

The virtual keyboard technology uses sensor technology and artificial intelligence to let users work on any flat surface as if it were a keyboard. Virtual keyboards let you easily create multilingual text content on almost any existing platform and output it directly to PDAs or even web pages. Being a small, handy, well-designed and easy-to-use application, the virtual keyboard is a perfect solution for cross-platform text input.
The main features are: platform-independent multilingual support for keyboard text input, built-in language layouts and settings, copy/paste etc. operations support just as in a regular text editor, no change in already existing system language settings, easy and user-friendly interface and design, and small file size.

The report first gives an overview of the QWERTY keyboards and the difficulties arising from using them. It then gives a description about the virtual keyboard technology and the various types of virtual keyboards in use. Finally the advantages, drawbacks and the applications are discussed.


Biocomputing

Added on: March 11th, 2012

Biocomputing is one of the emerging fields in the areas of molecular electronics and nanotechnology. The idea behind blending biology with technology stems from the limitations faced by semiconductor designers in decreasing the size of silicon chips, which directly affects processor speed. Biocomputers consist of biochips, unlike normal computers, which are silicon-based. A biochip consists of biomaterials such as nucleic acids, enzymes, etc.

The power of a biocomputer is that it acts as a massively parallel computer and has immense data storage capability. Thus, it can be used to solve NP-complete problems with higher efficiency. The possibilities for biocomputers include developing a credit-card-size computer that could design a super-efficient global air-traffic control system. The basic idea behind biocomputing is to use molecular reactions for computational purposes.


Viruses and Worms

Added on: March 11th, 2012

One of the most high-profile threats to information integrity is the computer virus. In this paper, I present what viruses, worms, and Trojan horses are and their differences, different strategies of virus spreading, and case studies of the Slammer and Blaster worms.

The internet consists of hundreds of millions of computers distributed around the world. Millions of people use the internet daily, taking full advantage of the available services at both personal and professional levels. The internet connectivity among computers on which the World Wide Web relies, however, renders its nodes an easy target for malicious users who attempt to exhaust their resources, damage data, or create havoc in the network.
Computer viruses, especially in recent years, have increased dramatically in number. One of the most high-profile threats to information integrity is the computer virus.

Surprisingly, PC viruses have been around for two-thirds of the IBM PC's lifetime, first appearing in 1986. With global computing on the rise, computer viruses have had more visibility in the past few years. In fact, the entertainment industry has helped by illustrating the effects of viruses in movies such as "Independence Day", "The Net", and "Sneakers". Along with computer viruses, computer worms are also increasing day by day. So, there is a need to immunise the internet by creating awareness among people about these threats in detail. In this paper I have explained the basic concepts of viruses and worms and how they spread.

The basic organisation of the paper is as follows. Section 2 gives some preliminaries: the definitions of computer viruses, worms, and Trojan horses, as well as some other malicious programs, and the basic characteristics of a virus.

Section 3 gives a detailed description: malicious code environments where a virus can propagate, an overview of virus/worm types, and the categories of worms, explained in a broad sense. Section 4, on file infection techniques, describes the various infection mechanisms of a virus. Section 5, on steps in worm propagation, describes the basic steps a typical worm follows to propagate.

Section 6 presents two case studies: the Slammer and Blaster worms.

Stream Control Transmission Protocol

Added on: March 11th, 2012

The Stream Control Transmission Protocol (SCTP) is a new IP transport protocol, existing at an equivalent level to UDP (User Datagram Protocol) and TCP (Transmission Control Protocol), which currently provide transport-layer functions to all of the main Internet applications. UDP, RTP, TCP, and SCTP are currently the IETF standards-track transport-layer protocols. Each protocol has a domain of applicability and services it provides, albeit with some overlaps.

Like TCP, SCTP provides a reliable transport service, ensuring that data is transported across the network without error and in sequence. Like TCP, SCTP is a connection-oriented mechanism, meaning that a relationship is created between the endpoints of an SCTP session prior to data being transmitted, and this relationship is maintained until all data transmission has been successfully completed.

Unlike TCP, SCTP provides a number of functions that are considered critical for signaling transport, and which at the same time can provide transport benefits to other applications requiring additional performance and reliability.

By clarifying the situations where the functionality of these protocols is applicable, this document can guide implementers and protocol designers in selecting which protocol to use.

Special attention is given to services SCTP provides which would make a decision to use SCTP the right one.
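One SCTP function widely valued for signaling transport is multi-streaming: message order is enforced within each stream of an association, not across streams, so a loss on one stream does not delay delivery on another (avoiding TCP-style head-of-line blocking). The toy simulation below illustrates per-stream ordered delivery only; it is not an SCTP implementation, and the data is invented.

```python
# Simulate per-stream in-order delivery: messages are (stream, seq, payload).
def deliver(received):
    """Deliver messages in order per stream id, buffering sequence gaps."""
    expected = {}   # stream id -> next sequence number to deliver
    buffered = {}   # (stream, seq) -> payload, held until the gap fills
    out = []
    for stream, seq, msg in received:
        buffered[(stream, seq)] = msg
        nxt = expected.get(stream, 0)
        while (stream, nxt) in buffered:          # drain any contiguous run
            out.append(buffered.pop((stream, nxt)))
            nxt += 1
        expected[stream] = nxt
    return out

# Stream 0 has a gap (seq 0 arrives last); stream 1 is unaffected by it.
arrivals = [(0, 1, "a1"), (1, 0, "b0"), (1, 1, "b1"), (0, 0, "a0")]
print(deliver(arrivals))   # ['b0', 'b1', 'a0', 'a1']
```

With a single TCP byte stream, nothing after the gap could have been delivered; here stream 1's messages get through while stream 0 waits.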

Design, Analysis, Fabrication and Testing of Composite Leaf Spring

Added on: March 11th, 2012

The subject gives a brief look at the suitability of composite leaf springs for vehicles and their advantages. Efforts have been made to reduce the cost of the composite leaf spring to that of the steel leaf spring. The achievement of weight reduction with adequate improvement of mechanical properties has made composites a very good replacement material for conventional steel. The material and manufacturing process are selected based on cost and strength factors. The design method is selected on the basis of mass production.
From the comparative study, it is seen that composite leaf springs perform better and are more economical than conventional leaf springs.

Polymer Fiber Reinforced Concrete Pavements

Added on: March 11th, 2012

Road transportation is undoubtedly the lifeline of the nation and its development is a crucial concern. The traditional bituminous pavements and their need for continuous maintenance and rehabilitation operations point towards the scope for cement concrete pavements. There are several advantages of cement concrete pavements over bituminous pavements. This paper explains POLYMER FIBRE REINFORCED CONCRETE PAVEMENTS, a recent advancement in the field of reinforced concrete pavement design. PFRC pavements prove to be more efficient than conventional RC pavements in several aspects, which are explained in this paper. The design procedure and paving operations of PFRC are also discussed in detail. A detailed case study of polyester fiber waste as fiber reinforcement is included and the results of the study are interpreted. The paper also includes a brief comparison of PFRC pavements with conventional concrete pavements. The merits and demerits of PFRC pavements are also discussed. The applications of PFRC in various construction projects in Kerala are also discussed in brief.


ITER

Added on: March 11th, 2012

ITER (originally an acronym of International Thermonuclear Experimental Reactor) is an international nuclear fusion research and engineering project, which is currently building the world’s largest and most advanced experimental tokamak nuclear fusion reactor at Cadarache in the south of France. The ITER project aims to make the long-awaited transition from experimental studies of plasma physics to full-scale electricity-producing fusion power plants. The project is funded and run by seven member entities – the European Union (EU), India, Japan, the People’s Republic of China, Russia, South Korea and the United States. The EU, as host party for the ITER complex, is contributing 45% of the cost, with the other six parties contributing 9% each.

The ITER fusion reactor itself has been designed to produce 500 megawatts of output power for 50 megawatts of input power, or ten times the amount of energy put in. The machine is expected to demonstrate the principle of getting more energy out of the fusion process than is used to initiate it, something that has not been achieved with previous fusion reactors. Construction of the facility began in 2007, and the first plasma is expected in 2019. When ITER becomes operational, it will become the largest magnetic confinement plasma physics experiment in use, surpassing the Joint European Torus. The first commercial demonstration fusion power plant, named DEMO, is proposed to follow on from the ITER project to bring fusion energy to the commercial market.
ITER produces heat using the same fusion principle that powers the sun.
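The design figures quoted above directly give the machine's fusion gain, conventionally written Q (fusion output power divided by external heating power); the variable names below are mine.

```python
# Fusion gain implied by ITER's stated design figures.
p_out_mw = 500   # designed fusion output power, MW
p_in_mw = 50     # external heating (input) power, MW

q = p_out_mw / p_in_mw   # Q > 1 means more fusion power out than heating in
print(q)   # 10.0
```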

Sixth Sense Technology

Added on: March 11th, 2012

Although miniaturized versions of computers help us connect to the digital world even while we are travelling, there is as yet no device that gives a direct link between the digital world and our physical interaction with the real world. Usually, information is stored traditionally on paper or on a digital storage device. Sixth Sense technology helps bridge this gap between the tangible and non-tangible worlds. The Sixth Sense device is basically a wearable gestural interface that connects the physical world around us with digital information and lets us use natural hand gestures to interact with this information. The technology was developed by Pranav Mistry, a PhD student in the Fluid Interfaces Group at the MIT Media Lab. It has a Web 4.0 view of human and machine interactions. Sixth Sense integrates digital information into the physical world and its objects, making the entire world your computer. It can turn any surface into a touch screen for computing, controlled by simple hand gestures. It is not a technology aimed at changing human habits, but at making computers and other machines adapt to human needs. It also supports multi-user and multi-touch provisions.

The Sixth Sense device is a mini-projector coupled with a camera and a cell phone, which acts as the computer and your connection to the Cloud, all the information stored on the web. The current prototype costs around $350. The Sixth Sense prototype has been used to implement several applications that demonstrate the usefulness, viability and flexibility of the system.


Biodiesel

Added on: March 11th, 2012

There is growing interest in biodiesel (fatty acid methyl ester, or FAME) because of the similarity of its properties to those of diesel fuels. Diesel engines operated on biodiesel have lower emissions of carbon monoxide, unburned hydrocarbons, particulate matter, and air toxics than when operated on petroleum-based diesel fuel. Production of fatty acid methyl ester (FAME) from rapeseed (a non-edible oil) fatty acid distillate having high free fatty acids (FFA) was investigated in this work. Conditions for the transesterification process of rapeseed oil were 1.8% H2SO4 as catalyst, a MeOH/oil molar ratio of 2:0.1, and a reaction temperature of 65 °C for a period of 3 h. The yield of methyl ester was > 90% in 1 h.

Biodiesel is becoming widely available in most parts of the U.S. and can be substituted for petroleum-based diesel fuel (“petro diesel”) in virtually any standard unmodified diesel engine. Biodiesel offers many advantages over petroleum-based diesel:

  • It is made from domestically produced and renewable agricultural products, mainly vegetable oil or animal fat.
  • It is essentially non-toxic and biodegradable.
  • It has a high flash point (over 300 °F) and is difficult to ignite with a match.
  • It reduces emissions of many toxic air pollutants.
  • It functions as an excellent fuel lubricant and performs similarly to low-sulfur diesel with regards to power, torque, and fuel consumption.
  • It can greatly reduce carbon emissions.

CRDI Engines

Added on: March 11th, 2012

Compared with petrol, diesel is the lower-quality product of the petroleum family. Diesel particles are larger and heavier than petrol particles, and thus more difficult to pulverize. Imperfect pulverization leads to more unburnt particles, hence more pollutants, lower fuel efficiency and less power.

Common-rail technology is intended to improve the pulverization process. Conventional direct-injection diesel engines must repeatedly generate fuel pressure for each injection. In CRDI engines, the pressure is built up independently of the injection sequence and remains permanently available in the fuel line. The CRDI system uses an ion sensor to provide real-time combustion data for each cylinder. The common rail upstream of the cylinders acts as an accumulator, distributing the fuel to the injectors at a constant pressure of up to 1600 bar. High-speed solenoid valves, regulated by the electronic engine management, separately control the injection timing and the amount of fuel injected for each cylinder as a function of the cylinder's actual need.

In other words, pressure generation and fuel injection are independent of each other. This is an important advantage of common-rail injection over conventional fuel injection systems, as CRDI increases the controllability of the individual injection processes and further refines fuel atomization, saving fuel and reducing emissions. A fuel economy improvement of 25 to 35% is obtained over a standard diesel engine, and a substantial noise reduction is achieved due to more synchronized timing operation. The principle of CRDI is also used in petrol engines as GDI (Gasoline Direct Injection), which removes to a great extent the drawbacks of conventional carburetors and MPFI systems.
CRDI stands for Common Rail Direct Injection.
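Because rail pressure is held constant, metering fuel per cylinder reduces to choosing how long each injector's solenoid stays open. The sketch below illustrates that idea only; it is not real ECU code, and the flow constant and demand figures are invented.

```python
# Illustrative common-rail metering model: constant rail pressure, so fuel
# quantity per cylinder is set purely by the solenoid opening duration.
RAIL_PRESSURE_BAR = 1600      # accumulator pressure, held constant
FLOW_MG_PER_MS = 25.0         # assumed injector flow rate at rail pressure

def injection_duration_ms(fuel_demand_mg):
    """Solenoid opening time needed to deliver the requested fuel mass."""
    return fuel_demand_mg / FLOW_MG_PER_MS

# Each cylinder can receive a different quantity on the same cycle,
# according to its actual need:
demands = {1: 50.0, 2: 47.5, 3: 50.0, 4: 45.0}   # mg per injection
timings = {cyl: injection_duration_ms(mg) for cyl, mg in demands.items()}
print(timings[1], timings[4])   # 2.0 1.8
```

In a conventional system, pressure varies with the injection event itself, so this clean separation between pressure and metering is not available.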

Agent Based Computing

Added on: March 11th, 2012

Agent-based computing represents an exciting new synthesis for both Artificial Intelligence and more generally, Computer Science. It has the potential to improve the theory and the practice of modeling, designing and implementing complex computer systems. Yet, to date, there has been little systematic analysis of what makes the agent-based approach such an appealing and powerful computational model. To rectify this situation, this paper aims to tackle exactly this issue. The standpoint of this analysis is the role of agent-based software in solving complex, real world problems. In particular, it will be argued that the development of robust and scalable software systems requires autonomous agents that can complete their objectives while situated in a dynamic and uncertain environment, that can engage in rich, high-level interactions, and that can operate within flexible organizational structures.

Keywords: autonomous agents, agent-oriented software engineering, complex systems


3D-DOCTOR

Added on: March 10th, 2012

3D-DOCTOR software is used to extract information from image files to create 3D models. It was developed using object-oriented technology and provides efficient tools to process and analyze 3D images, object boundaries, 3D models and other associated data items in an easy-to-use environment. It performs 3D image segmentation, 3D surface modeling, rendering, volume rendering, 3D image processing, deconvolution, registration, automatic alignment, measurements, and many other functions. The software supports both grayscale and color images stored in DICOM, TIFF, Interfile, GIF, JPEG, PNG, BMP, PGM, RAW or other image file formats. 3D-DOCTOR creates 3D surface models and volume renderings from 2D cross-section images in real time on your PC. Leading hospitals, medical schools and research organizations around the world are currently using 3D-DOCTOR.

Biofuels as Blending Components

Added on: March 10th, 2012

The single largest source of energy in India after coal is petroleum, about two-thirds of which is imported. The petroleum-derived fuels, i.e. motor gasoline and diesel, are the two major fuels extensively used today.

The high dependence on imported sources of energy is an issue related to the energy security of the country, and the combustion of fossil fuels has been recognized as a major cause of air pollution in Indian cities. Although CNG and LPG are being promoted as cleaner alternatives, both are in short supply and we have to depend on imports to meet requirements. The new petroleum policy passed on 6th October this year promotes CNG and LPG, but petrol and diesel continue to remain the two major fuels in use.

We therefore need to look for cleaner alternatives which could decrease not only pollution but also our dependence on other countries. Among the various alternatives, biofuels like ethanol and biodiesel, which can be produced from a host of biosources, can easily be used as blending components of both petrol and diesel in existing engines without modifications. Unlike CNG and LPG, they require no new infrastructure for fuel supply and distribution. Further, producing these fuels will help use surplus agricultural produce and aid rural development. Ethanol is used more in petrol engines, while biodiesel finds application in diesel engines. Both add oxygen to their respective fuels, which in turn improves combustion efficiency and reduces harmful exhaust emissions.


Synthetic Biology for Biofuels

Added on: March 10th, 2012

The energy crisis is not just a local or national issue but a global one as well. Various groups are responding to energy needs in particular ways; the U.S. and several other countries are looking into synthetic biology to address these problems. Research in this new field, as it is emerging in the United States, is largely devoted to the production of biofuels. Several institutions, which we will call "biofuels interest groups," envision this energy source playing a significant role in the remediation of current energy challenges. But what are the current challenges that are motivating these groups to pursue this particular resource through a particular and new science such as synthetic biology?

After an examination of several of these interest groups, stationed here in the U.S., we have come to the conclusion that the energy crisis to which each group responds is framed in a particular way such that biofuels play a major, if not the only viable and sustainable, role in the remediation of the problem. These groups claim that synthetic biology offers unique and viable paths toward a sustainable future. We will examine exactly what kinds of future are illustrated by each institution, and what they mean by a "sustainable future," by identifying the views, resources, technologies, and management strategies of each group. In addition, we will situate them in their human-practices context to view not only what they plan to do, but how and to what extent they will carry out their plan. The groups we present are the Joint BioEnergy Institute (JBEI), Amyris Biotechnologies, and the Energy Biosciences Institute (EBI). In order to assess the feasibility of these models outside of the lab, we present a section which provides an overview of the current socio-political atmosphere in which they must operate. This section examines alternative approaches to the energy crisis, motivations for realizing a certain approach, and the decision-making forces at play. Two distinct ideologies, represented by the US Department of Energy (DOE) and Tad Patzek, are presented to aid in this discussion.

Medical Image Fusion

Added on: March 10th, 2012

Image fusion is the process by which two or more images are combined into a single image retaining the important features from each of the original images. The fusion of images is often required for images acquired from different instrument modalities or capture techniques of the same scene or objects. Important applications of the fusion of images include medical imaging, microscopic imaging, remote sensing, computer vision, and robotics. Fusion techniques include the simplest method of pixel averaging to more complicated methods such as principal component analysis and wavelet transform fusion. Several approaches to image fusion can be distinguished, depending on whether the images are fused in the spatial domain or they are transformed into another domain, and their transforms fused.

With the development of new imaging sensors arises the need for a meaningful combination of all the imaging sources employed. The actual fusion process can take place at different levels of information representation; a generic categorization, in ascending order of abstraction, is: signal, pixel, feature and symbolic level. This paper focuses on the so-called pixel-level fusion process, in which a composite image is built from several input images. To date, the result of pixel-level image fusion has been intended primarily for presentation to a human observer, especially in image sequence fusion (where the input data consists of image sequences). A possible application is the fusion of forward-looking infrared (FLIR) and low-light visible (LLTV) images obtained by an airborne sensor platform, to help a pilot navigate in poor weather conditions or darkness.

In pixel-level image fusion, some generic requirements can be imposed on the fusion result. The fusion process should preserve all relevant information of the input imagery in the composite image (pattern conservation). The fusion scheme should not introduce any artifacts or inconsistencies that would distract the human observer or subsequent processing stages. The fusion process should be shift and rotation invariant, i.e. the fusion result should not depend on the location or orientation of an object in the input imagery. In the case of image sequence fusion, the additional problem of temporal stability and consistency of the fused sequence arises. The human visual system is primarily sensitive to moving light stimuli, so moving artifacts or time-dependent contrast changes introduced by the fusion process are highly distracting to the human observer. Two additional requirements therefore apply. Temporal stability: the fused image sequence should be temporally stable, i.e. gray-level changes in the fused sequence must be caused only by gray-level changes in the input sequences, never introduced by the fusion scheme itself. Temporal consistency: gray-level changes occurring in the input sequences must be present in the fused sequence without any delay or contrast change.
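As an illustration of the simplest spatial-domain approach mentioned above, the sketch below fuses two small image patches by pixel averaging and by a per-pixel "select the larger value" rule. The patch values and function names are invented for illustration only; real systems would apply such rules to full images or to transform coefficients.

```python
def fuse_average(img_a, img_b):
    """Pixel averaging: the simplest spatial-domain fusion method."""
    return [[(a + b) / 2.0 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

def fuse_select_max(img_a, img_b):
    """Per-pixel 'choose the larger value' rule, a crude salience-based
    selection often applied to transform coefficients instead of pixels."""
    return [[max(a, b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Two toy 2x2 grayscale patches of the same scene
patch_a = [[10, 200], [30, 40]]   # e.g. from a visible-light sensor
patch_b = [[90, 100], [50, 20]]   # e.g. from an infrared sensor

print(fuse_average(patch_a, patch_b))     # [[50.0, 150.0], [40.0, 30.0]]
print(fuse_select_max(patch_a, patch_b))  # [[90, 200], [50, 40]]
```

Averaging reduces contrast where the sources disagree, which is exactly why the more sophisticated wavelet-based methods mentioned above are preferred in practice.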

Cellular Neural Networks

Added on: March 10th, 2012 by No Comments

Cellular Neural Networks (CNN) are a revolutionary concept and an experimentally proven computing paradigm for analog computers. A standard CNN architecture consists of an m×n rectangular array of cells c(i,j) with Cartesian coordinates. If the inputs and outputs of a cell are treated as binary arguments, a CNN can realize Boolean functions. Using this technology, analog computers mimic the anatomy and physiology of many sensory and processing organs, with stored programmability. This has been called the “sensor revolution”: cheap sensor and MEMS arrays in the desired forms of artificial eyes, ears, noses and so on. Such a computer is capable of computing 3 trillion equivalent digital operations per second, a performance that can be matched only by supercomputers. Thanks to their unique architecture, CNN chips are mainly used for brain-like processing tasks, which are non-numeric and spatio-temporal in nature and require no more than the accuracy of common neurons.
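A minimal sketch of the cell dynamics behind this paradigm, assuming the standard CNN state equation dx/dt = −x + Σ A·y + Σ B·u + z with its piecewise-linear output function. The 1-D ring topology and the template values below are hypothetical toy choices, not a real chip configuration:

```python
def cnn_output(x):
    """Standard CNN piecewise-linear output: y = 0.5 * (|x + 1| - |x - 1|),
    which is linear between -1 and +1 and saturates outside that range."""
    return 0.5 * (abs(x + 1) - abs(x - 1))

def cnn_step(state, inputs, A, B, z, dt=0.1):
    """One Euler integration step of dx/dt = -x + sum(A*y_nbr) + sum(B*u_nbr) + z
    on a 1-D ring of cells, with 3-cell neighbourhood templates A and B."""
    n = len(state)
    nxt = []
    for i in range(n):
        fb = sum(A[k] * cnn_output(state[(i + k - 1) % n]) for k in range(3))
        ff = sum(B[k] * inputs[(i + k - 1) % n] for k in range(3))
        nxt.append(state[i] + dt * (-state[i] + fb + ff + z))
    return nxt

# Illustrative templates on a tiny 4-cell ring
A = [0.0, 2.0, 0.0]            # feedback template (centre only)
B = [0.0, 1.0, 0.0]            # control template (centre only)
state  = [0.0, 0.0, 0.0, 0.0]
inputs = [1.0, -1.0, 1.0, -1.0]
for _ in range(50):
    state = cnn_step(state, inputs, A, B, z=0.0)
# Each cell settles into a saturated state matching the sign of its input.
```

With a centre feedback weight above 1, every cell drives itself into saturation, which is how the binary (Boolean-function) behaviour mentioned above emerges from continuous analog dynamics.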


Hyperplane

Added on: March 10th, 2012 by No Comments

A new concept for flight to orbit is described in this paper. It involves mass addition to an ascending, air-breathing, hypersonic lifting vehicle. General laws for flight to orbit with mass addition are developed, and it is shown that payload capabilities are higher than those of even the most advanced rocket launchers. Hyperplanes are multipurpose, fully reusable aerospace vehicles. These vehicles use air-breathing engines and can take off from any conventional airport. They are multipurpose in the sense that they can be used for passenger or freight transport as well as satellite launching. Detailed studies led to a new concept for a hydrogen-fuelled, horizontal take-off, fully reusable, single-stage hypersonic vehicle, called HYPERPLANE. Avatar, a mini aerospace plane, is a technology demonstrator for hypersonic transportation; its design is a scaled-down version of the hyperplane.

The key enabling technology for hyperplanes is the scramjet engine, which uses air-breathing engine technology. A hyperplane requires a booster rocket to give it the supersonic velocity needed for scramjet operation. Thus hyperplanes need normal jet engines for horizontal take-off, then a rocket to boost the velocity, and a scramjet to sustain hypersonic speed. Once operational, a hyperplane could even launch satellites at lower cost than rockets. Many nations are working on hyperplane technology, including the USA, Russia and India; the only successful hypersonic flight so far was demonstrated by the USA’s X-43. The hyperplane Avatar, being developed by India, is expected to be used as a reusable missile launcher. This may be the technology that revolutionizes modern-day travel. Here we discuss the working, advantages, disadvantages and various examples of hyperplanes.


Wireless DSL

Added on: March 9th, 2012 by No Comments

In recent years, broadband technology has rapidly become an established, global commodity required by a high percentage of the population. Demand has risen rapidly, with a worldwide installed base of 57 million lines in 2002 rising to an estimated 80 million lines by the end of 2003. This healthy growth curve is expected to continue steadily over the next few years and reach the 200 million mark by 2006. DSL operators, who initially focused their deployments in densely populated urban and metropolitan areas, are now challenged to provide broadband services in suburban and rural areas where new markets are quickly taking root. Governments are prioritizing broadband as a key political objective so that all citizens can overcome the “broadband gap”, also known as the “digital divide”.

Wireless DSL (WDSL) offers an effective, complementary solution to wireline DSL, allowing DSL operators to provide broadband service to additional areas and populations that would otherwise find themselves outside the broadband loop. Government regulatory bodies are realizing the inherent worth in wireless technologies as a means for solving digital-divide challenges in the last mile and have accordingly initiated a deregulation process in recent years for both licensed and unlicensed bands to support this application. Recent technological advancements and the formation of a global standard and interoperability forum – WiMAX, set the stage for WDSL to take a significant role in the broadband market. Revenues from services delivered via Broadband Wireless Access have already reached $323 million and are expected to jump to $1.75 billion.

Abrasive Blast Cleaning

Added on: March 9th, 2012 by No Comments

An abrasive is a material, often a mineral, that is used to shape or finish a workpiece through rubbing, which wears part of the workpiece away. While finishing a material often means polishing it to gain a smooth, reflective surface, it can also involve roughening, as in satin, matte or beaded finishes.

Abrasives are extremely commonplace and are used very extensively in a wide variety of industrial, domestic, and technological applications. This gives rise to a large variation in the physical and chemical composition of abrasives as well as the shape of the abrasive. Common uses for abrasives include grinding, polishing, buffing, honing, cutting, drilling, sharpening, and sanding (see abrasive machining). (For simplicity, “mineral” in this article will be used loosely to refer to both minerals and mineral-like substances whether man-made or not.)

Files act by abrasion but are not classed as abrasives as they are a shaped bar of metal. However, diamond files are a form of coated abrasive (as they are metal rods coated with diamond powder).

Abrasives give rise to a form of wound called an abrasion or even an excoriation. Abrasions may arise following strong contact with surfaces made of materials such as concrete, stone, wood, carpet, and roads, though these surfaces are not intended for use as abrasives.

Biometric Systems

Added on: March 9th, 2012 by No Comments

A biometric is defined as a unique, measurable biological characteristic or trait used for automatically recognizing or verifying the identity of a human being. Statistically analyzing these biological characteristics has become known as the science of biometrics. These days, biometric technologies are typically used to analyze human characteristics for security purposes. The five most common physical biometric patterns analyzed for security purposes are the fingerprint, hand, eye, face, and voice. In this paper we give a brief overview of the use of biometric characteristics as a means of identification, and summarize some of the field’s advantages, disadvantages, strengths, limitations, and related privacy concerns. We also look at how this process has been refined over time and how it currently works.

DNA Computing

Added on: March 9th, 2012 by No Comments

Molecular biologists are beginning to unravel the information-processing tools, such as enzymes, that evolution has spent billions of years refining. These tools are now being applied to large numbers of DNA molecules, using them as biological computer processors.

Dr. Leonard Adleman, a well-known scientist, found a way to exploit the speed and efficiency of biological reactions to solve the Hamiltonian path problem, a close relative of the “traveling salesman problem”.

Based on Dr. Adleman’s experiment, we will explain DNA computing, its algorithms, how to manage DNA based computing and the advantages and disadvantages of DNA computing.
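To make the scale of the problem concrete, the brute-force search below enumerates Hamiltonian paths in a small hypothetical directed graph in ordinary code. Adleman's insight was that DNA strands encoding vertices and edges can perform essentially this enumeration massively in parallel in a test tube; the graph here is invented for illustration, not his original instance.

```python
from itertools import permutations

def hamiltonian_paths(vertices, edges, start, end):
    """Exhaustively enumerate Hamiltonian paths from start to end.
    In silico this is exponential-time brute force; Adleman's DNA
    experiment explored the same candidate space in parallel."""
    edge_set = set(edges)
    found = []
    middles = [v for v in vertices if v not in (start, end)]
    for middle in permutations(middles):
        path = (start,) + middle + (end,)
        if all((path[i], path[i + 1]) in edge_set
               for i in range(len(path) - 1)):
            found.append(path)
    return found

# A small hypothetical directed graph
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
print(hamiltonian_paths(V, E, 0, 3))  # [(0, 1, 2, 3)]
```

The candidate space grows factorially with the number of vertices, which is why massively parallel molecular approaches looked attractive for this class of problem.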

DNA Computing in Security

Added on: March 9th, 2012 by No Comments

As modern encryption algorithms are broken, the world of information security looks in new directions to protect the data it transmits. The concept of using DNA computing in the fields of cryptography and steganography has been identified as a possible technology that may bring forward a new hope for unbreakable algorithms. Is the fledgling field of DNA computing the next cornerstone in the world of information security or is our time better spent following other paths for our data encryption algorithms of the future?

This paper will outline some of the basics of DNA and DNA computing and its use in the areas of cryptography, steganography and authentication.

Research has been performed in both cryptographic and steganographic contexts with respect to DNA computing. The constraints of its high-tech lab requirements and computational limitations, combined with its labour-intensive extraction methods, show that the field of DNA computing is far from any kind of efficient use in today’s security world. DNA authentication, on the other hand, has shown great promise, with real-world examples already surfacing in the marketplace today.

Ozone Water Treatment

Added on: March 8th, 2012 by No Comments

Purifying Water With Ozone.

Ozone has been used in Europe for water treatment since early in the 20th century. Initial applications were to disinfect relatively clean spring or well water, but they increasingly evolved to also oxidize contaminants common to surface waters. Since World War II, ozonation has become the primary method to assure clean water in Switzerland, West Germany and France. More recently, major fresh water and waste water treatment facilities using ozone water treatment methods have been constructed throughout the world.

By comparison, the use of ozone for water treatment and purification in the United States has been much more limited. However, its use has been increasing, particularly over the last decade as the negative effects of chlorination have become more apparent. For example, the City of Los Angeles has built a modern water treatment plant that uses ozone for primary disinfection and microflocculation of as much as 600 million gallons of water per day. An East Texas power utility will be the first small water utility service in Texas to use ozone water treatment technology for drinking water purification; it has hired BiOzone for this task.

In the field of creative ozone water treatment, BiOzone designs, manufactures, and installs the finest ozone generator systems produced today. Each component interfaces harmoniously with the others to achieve the most cost-effective and optimum performance. These process trains are installed around the world in a variety of industries. The solutions we offer are available nowhere else in the world.

Symbian OS

Added on: March 8th, 2012 by No Comments

Symbian OS is the operating system licensed by the world’s leading mobile phone manufacturers. It is designed for the specific requirements of open, data-enabled 2G, 2.5G and 3G mobile phones. Key features of Symbian, and how Symbian supports the modern features of mobile phones, are discussed briefly.

The Symbian platform was created by merging and integrating software assets contributed by Nokia, NTT DoCoMo, Sony Ericsson and Symbian Ltd., including Symbian OS assets at its core, the S60 platform, and parts of the UIQ and MOAP(S) user interfaces.

Symbian is a mobile operating system (OS) and computing platform designed for smartphones and currently maintained by Accenture.[7] The Symbian platform is the successor to Symbian OS and Nokia Series 60; unlike Symbian OS, which needed an additional user interface system, Symbian includes a user interface component based on S60 5th Edition. The latest version, Symbian^3, was officially released in Q4 2010, first used in the Nokia N8. In May 2011 an update, Symbian Anna, was officially announced, followed by Nokia Belle (previously Symbian Belle) in August 2011.[8][9]
Symbian OS was originally developed by Symbian Ltd.[10] It is a descendant of Psion’s EPOC and runs exclusively on ARM processors, although an unreleased x86 port existed.

Hydro Drive

Added on: March 8th, 2012 by No Comments

Hydro forming is the process by which water pressure is used to form complex shapes from sheet or tube material.

Applications of hydroforming in the automotive industry are gaining popularity globally. The trend in auto manufacturing toward parts that are lighter and more complicated, with strength reinforcement only where required, is on the rise. The capability of hydroforming is increasingly being used to produce such complicated parts.


Tyres

Added on: March 8th, 2012 by No Comments

The primary purpose of the tyre is to provide traction.

  1. Tyres also help the suspension absorb road shocks, but this is a side benefit.
  2. They must perform under a variety of conditions. The road might be wet or dry; paved with asphalt, concrete or gravel; or there might be no road at all.
  3. The car might be traveling slowly on a straight road, or moving quickly through curves or over hills. All of these conditions call for special requirements that must be present, at least to some degree, in all tyres.
  4. In addition to providing good traction, tyres are also designed to carry the weight of the vehicle, to withstand side thrust over varying speeds and conditions, and to transfer braking and driving torque to the road.
  5. As the tyre rolls on the road, friction is created between the tyre and the road. This friction gives the tyre its traction.
  6. Although good traction is desirable, it must be limited.
  7. Too much traction means there is too much friction.
  8. Too much friction means there is a lot of rolling resistance.
  9. Rolling resistance wastes engine power and fuel, therefore it must be kept to a minimal level. This dilemma is a major concern in designing today’s tyres.
  10. The primary purpose of the tyre is to provide traction along with carrying the weight of the vehicle.
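The trade-off in points 7 to 9 can be put in rough numbers. The sketch below estimates the power wasted overcoming rolling resistance for a hypothetical car; the coefficients, mass and speed are illustrative, not taken from any tyre datasheet.

```python
def rolling_resistance_force(coefficient, vehicle_mass_kg, g=9.81):
    """F_rr = C_rr * N, where N is the normal load on the tyres (newtons)."""
    return coefficient * vehicle_mass_kg * g

def power_lost_to_rolling(coefficient, vehicle_mass_kg, speed_m_s):
    """Power wasted overcoming rolling resistance: P = F_rr * v (watts)."""
    return rolling_resistance_force(coefficient, vehicle_mass_kg) * speed_m_s

# Illustrative numbers: a 1500 kg car at 100 km/h (about 27.8 m/s)
low_rr  = power_lost_to_rolling(0.010, 1500, 27.8)   # roughly 4.1 kW
high_rr = power_lost_to_rolling(0.015, 1500, 27.8)   # roughly 6.1 kW
print(low_rr, high_rr)
```

Even a modest change in the rolling-resistance coefficient costs kilowatts of engine power at highway speed, which is why the traction-versus-resistance dilemma dominates tyre design.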

Grid Computing

Added on: March 8th, 2012 by No Comments

The Grid has the potential to fundamentally change the way science and engineering are done. The aggregate power of the computing resources connected by the Grid’s networks exceeds that of any single supercomputer by many orders of magnitude. At the same time, our ability to carry out computations of the scale and level of detail required, for example, to study the Universe or simulate a rocket engine is severely constrained by available computing power. Hence, such applications should be one of the main driving forces behind the development of Grid computing.
Grid computing is emerging as a new environment for solving difficult problems. Linear and nonlinear optimization problems can be computationally expensive. Resource access and management is one of the most important factors in grid computing. It requires a mechanism that can make decisions automatically to support the collaboration and scheduling of computing tasks.

Grid computing is an active research area which promises to provide a flexible infrastructure for complex, dynamic and distributed resource sharing and sophisticated problem solving environments. The Grid is not only a low level infrastructure for supporting computation, but can also facilitate and enable information and knowledge sharing at the higher semantic levels, to support knowledge integration and dissemination.

Light Tree

Added on: March 8th, 2012 by No Comments

The concept of a light-tree is introduced in a wavelength-routed optical network. A light-tree is a point-to-multipoint generalization of a lightpath. A lightpath is a point-to-point all-optical wavelength channel connecting a transmitter at a source node to a receiver at a destination node. Lightpath communication can significantly reduce the number of hops (or lightpaths) a packet has to traverse, and this reduction can, in turn, significantly improve the network’s throughput. We extend the lightpath concept by incorporating an optical multicasting capability at the routing nodes in order to increase the logical connectivity of the network and further decrease its hop distance. We refer to such a point-to-multipoint extension as a light-tree. Light-trees not only provide improved performance for unicast traffic, but also naturally better support multicast and broadcast traffic. In this study, we concentrate on the application and advantages of light-trees for unicast and broadcast traffic. We formulate the light-tree-based virtual topology design problem as an optimization problem with one of two possible objective functions: for a given traffic matrix,

(i) Minimize the network-wide average packet hop distance, or,
(ii) Minimize the total number of transceivers in the network. We demonstrate that an optimum light-tree-based virtual topology has clear advantages over an optimum lightpath-based virtual topology with respect to the above two objectives.
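Objective (i) can be illustrated with a toy computation: given a traffic matrix and the hop counts induced by a virtual topology, the network-wide average packet hop distance is the traffic-weighted mean hop count. The matrices below are hypothetical; the light-tree case simply assumes a multicast-capable node 0 now reaches both nodes 1 and 2 in one hop.

```python
def average_hop_distance(traffic, hops):
    """Network-wide average packet hop distance:
    sum(traffic[s][d] * hops[s][d]) / sum(traffic[s][d])."""
    n = len(traffic)
    total = sum(traffic[s][d] for s in range(n) for d in range(n))
    weighted = sum(traffic[s][d] * hops[s][d]
                   for s in range(n) for d in range(n))
    return weighted / total

# 3-node example (hypothetical traffic, packets per second)
traffic = [[0, 10, 5],
           [10, 0, 5],
           [5, 5, 0]]

# Hop counts under a lightpath-only virtual topology ...
hops_lightpath = [[0, 1, 2],
                  [1, 0, 2],
                  [2, 2, 0]]
# ... and after adding a light-tree rooted at node 0 spanning nodes 1 and 2.
hops_lighttree = [[0, 1, 1],
                  [1, 0, 2],
                  [2, 2, 0]]

print(average_hop_distance(traffic, hops_lightpath))  # 1.5
print(average_hop_distance(traffic, hops_lighttree))  # 1.375
```

The single multicast-capable node lowers the average hop distance, which is the kind of improvement the optimization formulation above tries to maximize network-wide.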

Utility Fog (Nanofog)

Added on: March 7th, 2012 by No Comments

Nanotechnology is based on the concept of tiny, self-replicating robots. The Utility Fog is a very simple extension of this idea. Utility Fog is highly advanced nanotechnology which the Technocratic Union has developed as the ultimate multi-purpose tool. It is a user-friendly, completely programmable collection of nanomachines that can form a vast range of machinery, from office pins to spaceships. It can simulate any material from gas to liquid to solid, and in sufficient quantities it can even be used to implement the ultimate in virtual reality.

With the right programming, the robots can exert any force in any direction on the surface of any object. They can support the object so that it apparently floats in air. They can support a person applying the same pressure that a chair would. A programme running in Utility Fog can thus simulate the physical existence of any object.

Utility Fog should be capable of simulating most everyday materials, dynamically changing its form and forming a substrate for an integrated virtual reality. This paper will examine the basic concept, and explore some of the applications of this material.

Four Wheel Steering System

Added on: March 7th, 2012 by 1 Comment

Four-wheel steering (4WS), also called rear-wheel steering or all-wheel steering, provides a means to actively steer the rear wheels during turning maneuvers. It should not be confused with four-wheel drive, in which all four wheels of a vehicle are powered. It improves handling and helps the vehicle make tighter turns.

Production-built cars tend to understeer or, in a few instances, oversteer. If a car could automatically compensate for an understeer/oversteer problem, the driver would enjoy nearly neutral steering under varying conditions. 4WS is a serious effort on the part of automotive design engineers to provide near-neutral steering.

The front wheels do most of the steering. Rear-wheel steering is generally limited to 5–6° during an opposite-direction turn. During a same-direction turn, rear-wheel steering is limited to about 1–1.5°.

When both the front and rear wheels steer toward the same direction, they are said to be in-phase and this produces a kind of sideways movement of the car at low speeds. When the front and rear wheels are steered in opposite direction, this is called anti-phase, counter-phase or opposite-phase and it produces a sharper, tighter turn.
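A rough sense of why anti-phase steering produces the tighter turn comes from the kinematic bicycle model, where the turning radius is R = L / (tan δf − tan δr), with δr negative in anti-phase. The wheelbase and angles below are hypothetical illustrative values, and the model ignores tyre slip.

```python
import math

def turning_radius(wheelbase_m, front_angle_deg, rear_angle_deg):
    """Kinematic bicycle-model turning radius R = L / (tan(df) - tan(dr)).
    Positive rear angle = in-phase (same direction as the front wheels),
    negative rear angle = anti-phase (opposite direction)."""
    df = math.radians(front_angle_deg)
    dr = math.radians(rear_angle_deg)
    return wheelbase_m / (math.tan(df) - math.tan(dr))

# Hypothetical 2.7 m wheelbase car steering 20 degrees at the front:
r_2ws        = turning_radius(2.7, 20, 0)    # roughly 7.4 m
r_anti_phase = turning_radius(2.7, 20, -5)   # roughly 6.0 m, a tighter turn
print(r_2ws, r_anti_phase)
```

Even a 5° anti-phase rear angle shrinks the radius noticeably, which is why anti-phase steering is reserved for low-speed maneuvers where a sharp turn is wanted.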

Polyfuses (PPTC)

Added on: March 7th, 2012 by 1 Comment

A fuse is a one-time overcurrent protection device employing a fusible link that melts after the current exceeds a certain level for a certain length of time. Typically, a wire or chemical compound breaks the circuit when the current exceeds the rated value.

The polyfuse is a new standard for circuit protection: it is resettable. Technically, polyfuses are not fuses but polymeric positive temperature coefficient thermistors (PPTCs). Resettable fuses provide overcurrent protection and automatic restoration.
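The behaviour described above (low resistance in normal operation, a steep resistance rise once fault current heats the polymer past its transition temperature, and recovery on cooling) can be caricatured with a smooth resistance-versus-temperature curve. All parameter values below are illustrative, not taken from any PPTC datasheet.

```python
import math

def pptc_resistance(temp_c, r_cold=0.5, r_tripped=5000.0,
                    trip_temp_c=125.0, width_c=5.0):
    """Toy PPTC model: resistance transitions smoothly (logistically) from
    r_cold to r_tripped around the trip temperature.  A real device shows
    an even steeper, orders-of-magnitude jump plus hysteresis."""
    frac = 1.0 / (1.0 + math.exp(-(temp_c - trip_temp_c) / width_c))
    return r_cold + (r_tripped - r_cold) * frac

print(pptc_resistance(25))    # well under 1 ohm: normal operation
print(pptc_resistance(150))   # thousands of ohms: tripped, current choked off
```

In the tripped state the high resistance limits the fault current to a trickle that keeps the polymer warm; once the fault is removed, the device cools and its resistance falls back, which is the "automatic restoration" mentioned above.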

Common Rail Direct Injection

Added on: March 7th, 2012 by No Comments

Compared with petrol, diesel is the lower-quality product of the petroleum family. Diesel particles are larger and heavier than petrol particles, and thus more difficult to pulverize. Imperfect pulverization leads to more unburnt particles, hence more pollutants, lower fuel efficiency and less power.

Common-rail technology is intended to improve the pulverization process. Conventional direct-injection diesel engines must repeatedly generate fuel pressure for each injection. In CRDI engines, by contrast, the pressure is built up independently of the injection sequence and remains permanently available in the fuel line. Some CRDI systems use an ion sensor to provide real-time combustion data for each cylinder. The common rail upstream of the cylinders acts as an accumulator, distributing the fuel to the injectors at a constant pressure of up to 1600 bar. High-speed solenoid valves, regulated by the electronic engine management, separately control the injection timing and the amount of fuel injected for each cylinder as a function of the cylinder’s actual need.

In other words, pressure generation and fuel injection are independent of each other. This is an important advantage of common-rail injection over conventional fuel injection systems, as CRDI increases the controllability of the individual injection processes and further refines fuel atomization, saving fuel and reducing emissions. A fuel-economy improvement of 25 to 35% is obtained over a standard diesel engine, and a substantial noise reduction is achieved due to more synchronized timing operation. The principle of CRDI is also used in petrol engines as GDI (Gasoline Direct Injection), which removes to a great extent the drawbacks of conventional carburettors and MPFI systems.

Cryogenics and its Space Applications

Added on: March 7th, 2012 by Afsal Meerankutty 3 Comments

Cryogenics is the study of how to reach low temperatures and of how materials behave when they get there. Besides the familiar Fahrenheit and Celsius (Centigrade) temperature scales, cryogenicists use the Kelvin and Rankine scales. Although the apparatus used for spacecraft is specialized, some of the general approaches are the same as those used in everyday life. Cryogenics involves the study of low temperatures, from about 100 Kelvin down to absolute zero.

One interesting feature of materials at low temperatures is that the air condenses into a liquid. The two main gases in air are oxygen and nitrogen. Liquid oxygen, “lox” for short, is used in rocket propulsion. Liquid nitrogen is used as a coolant. Helium, which is much rarer than oxygen or nitrogen, is also used as a coolant. In more detail, cryogenics is the study of how to produce low temperatures or also the study of what happens to materials when you have cooled them down.
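The four temperature scales mentioned above are related by fixed offsets and a 9/5 ratio. A few helper conversions, using the boiling point of liquid nitrogen (77.36 K, i.e. −195.79 °C) as a worked check:

```python
def celsius_to_kelvin(c):
    """K = degrees C + 273.15"""
    return c + 273.15

def fahrenheit_to_rankine(f):
    """deg R = deg F + 459.67 (Rankine is to Fahrenheit what Kelvin is to Celsius)"""
    return f + 459.67

def kelvin_to_rankine(k):
    """deg R = K * 9/5"""
    return k * 9.0 / 5.0

# Liquid nitrogen, the common coolant mentioned above, boils at -195.79 C:
print(celsius_to_kelvin(-195.79))          # about 77.36 K
print(round(kelvin_to_rankine(77.36), 2))  # about 139.25 degrees Rankine
```

Both absolute scales put zero at absolute zero, which is why cryogenicists prefer them: "100 Kelvin to absolute zero" is simply the interval from 100 down to 0.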

Magnetic Refrigeration

Added on: March 7th, 2012 by Afsal Meerankutty 2 Comments

Magnetic refrigeration is a technology that has proven to be environmentally safe. Models have shown a 25% efficiency improvement over vapor-compression systems. To make the magnetic refrigerator commercially viable, scientists need to know how to achieve larger temperature swings. Two advantages of magnetic refrigeration over vapor-compression systems are that no hazardous chemicals are used and that it can be up to 60% efficient.
There are still thermal and magnetic hysteresis problems to be solved before the first-order phase-transition materials that exhibit the GMCE become really useful; this is a subject of current research. The effect is currently being explored to produce better refrigeration techniques, especially for use in spacecraft. The technique is already used to achieve cryogenic temperatures (below 10 K) in the laboratory.

The objective of this effort is to determine the feasibility of designing, fabricating and testing a sensor cooler which uses solid materials as the refrigerant. These materials exhibit the unique property known as the magnetocaloric effect: they increase and decrease in temperature when magnetized and demagnetized. This effect has been observed for many years and was used for cooling near absolute zero. Recently, materials are being developed which have sufficient temperature and entropy change to make them useful for a wide range of temperature applications.

The proposed effort includes magnetocaloric material selection, analyses, design and integration of components into a preliminary design. Benefits of this design are lower cost, longer life, lower weight and higher efficiency, because it requires only one moving part: the rotating disk on which the magnetocaloric material is mounted. The unit uses no gas compressor, no pumps, no working fluid, no valves, and no ozone-destroying chlorofluorocarbons or hydrochlorofluorocarbons (CFCs/HCFCs). Potential commercial applications include cooling of electronics, superconducting components used in telecommunications equipment (cell-phone base stations), home and commercial refrigerators, heat pumps, air conditioning for homes, offices and automobiles, and virtually any place that refrigeration is needed.

Cell Phone Virus and Security

Added on: March 7th, 2012 by No Comments

Rapid advances in low-power computing, communications, and storage technologies continue to broaden the horizons of mobile devices such as cell phones and personal digital assistants (PDAs). As the use of these devices extends into applications that require them to capture, store, access, or communicate sensitive data (e.g., mobile e-commerce, financial transactions, acquisition and playback of copyrighted content), security becomes an immediate concern. Left unaddressed, security concerns threaten to impede the deployment of new applications and value-added services, an important engine of growth for the wireless, mobile appliance and semiconductor industries. According to a survey of mobile appliance users, 52% cited security concerns as the biggest impediment to their adoption of mobile commerce.

A cell-phone virus is basically the same thing as a computer virus — an unwanted executable file that “infects” a device and then copies itself to other devices. But whereas a computer virus or worm spreads through e-mail attachments and Internet downloads, a cell-phone virus or worm spreads via Internet downloads, MMS (multimedia messaging service) attachments and Bluetooth transfers. The most common type of cell-phone infection right now occurs when a cell phone downloads an infected file from a PC or the Internet, but phone-to-phone viruses are on the rise.
Current phone-to-phone viruses almost exclusively infect phones running the Symbian operating system. The large number of proprietary operating systems in the cell-phone world is one of the obstacles to mass infection. Cell-phone-virus writers have no Windows-level market share to target, so any virus will affect only a small percentage of phones.

Infected files usually show up disguised as applications such as games, security patches, add-on functionalities and free stuff. Infected text messages sometimes steal the subject line from a message you’ve received from a friend, which of course increases the likelihood of your opening it; but opening the message isn’t enough to get infected. You have to choose to open the message attachment and agree to install the program, which is another obstacle to mass infection: to date, no reported phone-to-phone virus auto-installs. The installation obstacles and the methods of spreading limit the amount of damage the current generation of cell-phone viruses can do.

Standard operating systems and Bluetooth technology will be a trend in future cell-phone features. These will enable cell-phone viruses to spread either through SMS or by sending Bluetooth requests when phones are physically close enough. The difference in spreading methods gives these two types of viruses different epidemiological characteristics. The spread of SMS viruses is based mainly on people’s social connections, whereas the spread of Bluetooth viruses is affected by people’s mobility patterns and population distribution. Using cell-phone data recording the calls, SMS and locations of more than 6 million users, we study the spread of SMS and Bluetooth viruses and characterize how the social network and the mobility of mobile phone users affect these spreading processes.
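A minimal sketch of one of the two spreading modes, assuming a toy SI (susceptible-infected) model: an "SMS virus" follows the fixed social-contact graph, crossing each contact with some probability per step. (A Bluetooth virus would instead depend on physical proximity, which is not modelled here.) The contact graph and infection probability are invented for illustration, not calibrated to real user data.

```python
import random

def simulate_si_spread(contacts, seed, steps, p_infect, rng=None):
    """Toy SI model of an SMS virus on a social-contact graph.
    contacts maps each phone to the phones it messages; an infected
    phone infects each susceptible contact with probability p_infect
    per step.  Returns the infected count after each step."""
    rng = rng or random.Random(42)  # fixed seed for reproducibility
    infected = {seed}
    history = [len(infected)]
    for _ in range(steps):
        newly = set()
        for phone in infected:
            for peer in contacts.get(phone, ()):
                if peer not in infected and rng.random() < p_infect:
                    newly.add(peer)
        infected |= newly
        history.append(len(infected))
    return history

# Tiny hypothetical contact graph of five phones
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(simulate_si_spread(graph, seed=0, steps=5, p_infect=0.5))
```

Because the SMS virus can only travel along social links, its reach is bounded by the connected component containing the first infected phone, whereas a proximity-based Bluetooth virus is bounded instead by where people physically go.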