Surface Plasmon Resonance

Added on: March 20th, 2012 by No Comments

Surface plasmon resonance (SPR) is a phenomenon occurring at metal surfaces (typically gold and silver) when an incident light beam strikes the surface at a particular angle. Depending on the thickness of a molecular layer at the metal surface, the SPR phenomenon results in a graded reduction in the intensity of the reflected light. Biomedical applications take advantage of the exquisite sensitivity of SPR to the refractive index of the medium next to the metal surface, which makes it possible to measure accurately the adsorption of molecules on the metal surface and their eventual interactions with specific ligands. The last ten years have seen a tremendous development of SPR use in biomedical applications.

The technique is applied not only to the measurement in real time of the kinetics of ligand-receptor interactions and to the screening of lead compounds in the pharmaceutical industry, but also to the measurement of DNA hybridization, enzyme-substrate interactions, polyclonal antibody characterization, epitope mapping, protein conformation studies and label-free immunoassays. Conventional SPR is applied in specialized biosensing instruments. These instruments use expensive sensor chips of limited reuse capacity and require complex chemistry for ligand or protein immobilization. SPR has also been applied successfully with colloidal gold particles in buffered solutions. This approach offers many advantages over conventional SPR. The support is cheap, easily synthesized, and can be coated with various proteins or protein-ligand complexes by charge adsorption. With colloidal gold, the SPR phenomenon can be monitored in any UV-visible spectrophotometer. For high-throughput applications the technology has been adapted to an automated clinical chemistry analyzer. This simple technology finds application in label-free quantitative immunoassays for proteins and small analytes, in conformational studies with proteins, and in real-time association-dissociation measurements of receptor-ligand interactions for high-throughput screening and lead optimization.
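
For reference, the sensing principle can be made concrete with the standard Kretschmann-configuration resonance condition (a textbook relation, not taken from the abstract above); the optical constants below are assumed, illustrative values.

    # Hedged sketch: SPR resonance angle in the Kretschmann configuration.
    # k_sp/k0 = sqrt(eps_m*eps_d/(eps_m+eps_d)); resonance occurs when
    # n_prism*sin(theta) matches its real part, so a change in the sample
    # refractive index shifts the resonance angle measured by the instrument.
    import cmath, math

    eps_metal = complex(-25.0, 1.44)   # gold near 780 nm (approximate value)
    n_sample  = 1.33                   # aqueous medium next to the gold film
    n_prism   = 1.515                  # BK7 glass prism

    k_sp = cmath.sqrt(eps_metal * n_sample**2 / (eps_metal + n_sample**2))
    theta_spr = math.degrees(math.asin(k_sp.real / n_prism))
    print(f"resonance angle ≈ {theta_spr:.1f} degrees")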

Use of Discrete Fiber in Road Construction

Added on: March 19th, 2012 by 2 Comments

New materials and construction techniques are required to provide civil engineers with alternatives to traditional road construction practices. Traditional techniques have not been able to bear mixed traffic loads for long, so the pavement requires overlaying. To overcome this problem, fiber inclusion in pavements is now being adopted. This paper highlights the use of discrete fibers in road construction. Recently, geosynthetics have been used to reinforce and separate base course material for aggregate-surfaced roads and flexible pavements. Inclusion of discrete fibers increases shear strength and ductility.

Dynamic Speed Governor

Added on: March 19th, 2012 by 2 Comments

The Dynamic Speed Governor is a system that can be implemented effectively for an efficient and well-controlled traffic system. It can limit the speed of a moving vehicle over an area where the speed has to be restricted to a predetermined value. The Dynamic Speed Governor consists of two main parts, the transmitter section and the receiver section. The transmitter section is mounted on the signboard on which the speed limit for that area is indicated; the receiver section is kept inside the vehicle. The radio frequency signal from the transmitter is received by the receiver. If the speed of the vehicle is greater than the speed limit proposed for that particular area, as transmitted by the transmitter section, the speed governor comes into action and restricts the driver from going beyond that rated speed. When the system detects that the vehicle has exceeded the speed limit, a signal is generated by the dynamic speed governor circuit. This signal in turn drives the mechanical part of the vehicle, which closes the fuel nozzle and thereby restricts the vehicle from going beyond that speed. In this particular reproduction of the system, to indicate the signal produced, the circuit output is used to trigger a monostable multivibrator, which in turn sounds a buzzer.
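
A minimal sketch of the receiver-side decision logic described above; the fuel-valve and buzzer hooks are stand-ins for the real actuator and monostable multivibrator circuits, not part of the original design.

    # Illustrative only: compare the measured speed with the limit broadcast
    # by the signboard transmitter and act when it is exceeded.
    def close_fuel_valve() -> None:
        print("fuel valve closed: speed restricted")

    def sound_buzzer() -> None:
        print("buzzer triggered")

    def governor_step(vehicle_speed_kmph: float, limit_kmph: float) -> None:
        if vehicle_speed_kmph > limit_kmph:
            close_fuel_valve()
            sound_buzzer()

    governor_step(72.0, 50.0)   # example: 72 km/h in a 50 km/h zone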

Tri-Gate Transistors

Added on: March 18th, 2012 by No Comments

Tri-Gate transistors, the first to be truly three-dimensional, mark a major revolution in the semiconductor industry. The semiconductor industry continues to push technological innovation to keep pace with Moore’s Law, shrinking transistors so that ever more can be packed on a chip. However, at future technology nodes, the ability to shrink transistors becomes more and more problematic, in part due to worsening short channel effects and an increase in parasitic leakages with scaling of the gate-length dimension. In this regard, the Tri-Gate transistor architecture makes it possible to continue Moore’s Law at 22 nm and below without a major transistor redesign. The physics, technology and advantages of the device are briefly discussed in this paper.

Shallow Water Acoustic Networks

Added on: March 18th, 2012 by No Comments

Shallow water acoustic networks are generally formed by acoustically connected ocean-bottom sensor nodes, autonomous underwater vehicles (AUVs), and surface stations that serve as gateways and provide radio communication links to on-shore stations. The QoS of such networks is limited by the low bandwidth of acoustic transmission channels, the high latency resulting from the slow propagation of sound, and elevated noise levels in some environments. The long-term goal in the design of underwater acoustic networks is to provide a self-configuring network of distributed nodes with network links that automatically adapt to the environment through selection of the optimum system parameters. This paper considers several aspects of the design of shallow water acoustic networks that maximize throughput and reliability while minimizing power consumption. In the last two decades, underwater acoustic communications has experienced significant progress. The traditional approach for ocean-bottom or ocean-column monitoring is to deploy oceanographic sensors, record the data, and recover the instruments, but this approach fails for real-time monitoring. The ideal solution for real-time monitoring of selected ocean areas over long periods of time is to connect various instruments through wireless links within a network structure. Basic underwater acoustic networks are formed by establishing bidirectional acoustic communication between nodes such as autonomous underwater vehicles (AUVs) and fixed sensors. The network is then connected to a surface station, which can further be connected to terrestrial networks such as the Internet.

Keywords
Underwater sensor networks, acoustic networks, acoustic communication architectures.

Confidential Data Storage and Deletion

Added on: March 18th, 2012 by No Comments

With the decrease in the cost of electronic storage media, more and more sensitive data is stored on such media. Laptop computers regularly go missing, either because they are lost or because they are stolen. These laptops contain confidential information in the form of documents, presentations, emails, cached data, and network access credentials. This confidential information is typically far more valuable than the laptop hardware if it reaches the wrong people. There are two major aspects to safeguarding the privacy of data on these storage media and laptops. First, data must be stored in a confidential manner. Second, we must make sure that confidential data, once deleted, can no longer be restored. Various methods exist to store confidential data, such as encryption programs and encrypted file systems. Microsoft BitLocker Drive Encryption provides encryption for hard disk volumes and is available with the Windows Vista Ultimate and Enterprise editions. This seminar describes the most commonly used encryption algorithm, the Advanced Encryption Standard (AES), which underlies many of these confidential data storage methods. It also describes some confidential data erasure methods, such as physical destruction, data overwriting, and key erasure.

Keywords: privacy of data, confidential data storage, encryption, Advanced Encryption Standard (AES), Microsoft BitLocker, confidential data erasure, data overwriting, key erasure.
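
As a rough illustration of the two ideas above (confidential storage with AES and deletion by key erasure), the following sketch uses the third-party Python 'cryptography' package; the helper name and sample data are assumptions, not taken from the seminar.

    # Store data confidentially with AES-GCM, then make it unrecoverable by
    # destroying the key ("key erasure").
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_blob(plaintext: bytes, key: bytes) -> bytes:
        nonce = os.urandom(12)                              # unique per message
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    key = AESGCM.generate_key(bit_length=256)               # 256-bit AES key
    stored = encrypt_blob(b"confidential document", key)

    # Once every copy of the key is destroyed, the ciphertext is effectively
    # deleted, far faster than overwriting the data itself. (A real system
    # would scrub the key from memory and key storage; rebinding a Python
    # name, as below, only illustrates the idea.)
    key = None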

Magneto Abrasive Flow Machining

Added on: March 18th, 2012 by No Comments

Magneto abrasive flow machining (MAFM) is a new technique in machining. The orbital flow machining process has recently been claimed to be another improvement over AFM, performing three-dimensional machining of complex components. These processes can be classified as hybrid machining processes (HMP), a recent concept in the advancement of non-conventional machining. The reason for developing a hybrid machining process is to make use of combined or mutually enhanced advantages and to avoid or reduce the adverse effects the constituent processes produce when applied individually. In almost all non-conventional machining processes, such as electric discharge machining, electrochemical machining and laser beam machining, low material removal rate is considered a general problem, and attempts continue to develop techniques to overcome it. The present paper reports the preliminary results of an ongoing research project aimed at exploring techniques for improving material removal (MR) in AFM. One such technique uses a magnetic field around the workpiece. Magnetic fields have been successfully exploited in the past, for example to provide the machining force in magnetic abrasive finishing (MAF), which is used for micro-machining and finishing of components, particularly circular tubes. The process under investigation combines AFM and MAF, and is given the name magneto abrasive flow machining (MAFM).

Drive-By-Wire Systems

Added on: March 17th, 2012 by No Comments

The idea of ‘X-by-wire’, the control of a car’s vital functions like steering and brakes by electrical rather than mechanical links, is tantalizing to some and terrifying to others.

Aircraft may have been equipped with fly-by-wire systems for years, but do we really feel comfortable with the idea of a computer sitting between ourselves and the steering and brakes of our car?

What might at first sound alarming actually has many benefits, and whatever we as customers may think, X-by-wire is already emerging fast. Electronic throttles, the Mercedes Sensotronic Brake Control (SBC) and now BMW’s Active Front Steering (AFS) in its new launches are all examples of drive-by-wire technologies.

  • Mercedes-Benz’s cutting-edge Sensotronic Brake Control technology, used in the new SL, delivers more braking force in an emergency and helps stabilize skids earlier.
  • Delphi’s X-by-wire is a family of advanced braking, steering, throttle and suspension control systems which function without conventional mechanical components.
  • GM’s hydrogen-powered X-by-wire concept uses electrical signals instead of mechanical linkages or hydraulics to operate the motor, brakes and steering functions.

A highly organized network of wires, sensors, controllers and actuators controls the mechanical functions of the vehicle’s steering, braking, throttle and suspension systems.

Electromagnetic Transducer

Added on: March 17th, 2012 by No Comments

This paper describes a novel electromagnetic transducer called the Four Quadrant Transducer (4QT) for hybrid electric vehicles. The system consists of one electrical machine unit (including two rotors) and two inverters, which enable the vehicle’s Internal Combustion Engine (ICE) to run at its optimum working points regarding efficiency, almost independently of the changing load requirements at the wheels. In other words, the ICE is operated at high torque and low speed as much as possible. As a consequence, reduced fuel consumption will be achieved.

The basic structure of the Four Quadrant Transducer system, simulation results and ideas about suitable topologies for designing a compact machine unit are reported. The simulated system of a passenger car is equipped with a single-step gearbox, making it simple and cost effective. Since the engine is not mechanically connected to the wheels and the electrical components have lower power ratings than the engine itself, the system takes advantage of the best characteristics of the series and parallel hybrids, respectively. The proposed concept looks promising, and fuel savings of more than 40% compared with a conventional vehicle can be achieved.

Reactive Powder Concrete

Added on: March 17th, 2012 by No Comments

Concrete is a critical material for the construction of infrastructure facilities throughout the world. A new material known as reactive powder concrete (RPC) is becoming available that differs significantly from traditional concretes. RPC has no large aggregates, and contains small steel fibers that provide additional strength and in some cases can replace traditional mild steel reinforcement. Due to its high density and lack of aggregates, ultrasonic inspections at frequencies ten to twenty times those of traditional concrete inspections are possible. These properties make it possible to evaluate anisotropy in the material using ultrasonic waves, and thereby measure quantitatively the elastic properties of the material. The research reported in this paper examines the elastic properties of this new material, modeled as an orthotropic elastic solid, and discusses ultrasonic methods for evaluating Young’s modulus nondestructively. Calculations of shear moduli and Poisson’s ratio based on ultrasonic velocity measurements are also reported. Ultrasonic results are compared with traditional destructive methods.
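
For the simpler isotropic case (the orthotropic analysis in the paper generalizes this with direction-dependent velocities), the standard relations between ultrasonic wave speeds and elastic constants can be sketched as follows; the density and velocities are assumed, illustrative values.

    # G = rho*Vs^2, nu = (Vp^2 - 2Vs^2) / (2*(Vp^2 - Vs^2)), E = 2G(1 + nu)
    rho = 2500.0     # density, kg/m^3 (assumed)
    v_p = 5000.0     # longitudinal (P-wave) velocity, m/s (assumed)
    v_s = 3000.0     # shear (S-wave) velocity, m/s (assumed)

    G  = rho * v_s**2
    nu = (v_p**2 - 2*v_s**2) / (2*(v_p**2 - v_s**2))
    E  = 2*G*(1 + nu)
    print(f"G = {G/1e9:.1f} GPa, nu = {nu:.2f}, E = {E/1e9:.1f} GPa")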

Threats of HEMP and HPM Devices

Added on: March 16th, 2012 by No Comments

Electromagnetic Pulse (EMP) is an instantaneous, intense energy field that can overload or disrupt at a distance numerous electrical systems and high technology microcircuits, which are especially sensitive to power surges. A large scale EMP effect can be produced by a single nuclear explosion detonated high in the atmosphere. This method is referred to as High-Altitude EMP (HEMP). A similar, smaller-scale EMP effect can be created using non-nuclear devices with powerful batteries or reactive chemicals. This method is called High Power Microwave (HPM). Several nations, including reported sponsors of terrorism, may currently have a capability to use EMP as a weapon for cyber warfare or cyber terrorism to disrupt communications and other parts of the U.S. critical infrastructure. Also, some equipment and weapons used by the U.S. military may be vulnerable to the effects of EMP.

The threat of an EMP attack against the United States is hard to assess, but some observers indicate that it is growing along with worldwide access to newer technologies and the proliferation of nuclear weapons. In the past, the threat of mutually assured destruction provided a lasting deterrent against the exchange of multiple high-yield nuclear warheads. However, now even a single, specially-designed low-yield nuclear explosion high above the United States, or over a battlefield, can produce a large-scale EMP effect that could result in a widespread loss of electronics, but no direct fatalities, and may not necessarily evoke a large nuclear retaliatory strike by the U.S. military. This, coupled with the possible vulnerability of U.S. commercial electronics and U.S. military battlefield equipment to the effects of EMP, may create a new incentive for other countries to develop or acquire a nuclear capability.

Policy issues raised by this threat include (1) what is the United States doing to protect civilian critical infrastructure systems against the threat of EMP, (2) how does the vulnerability of U.S. civilian and military electronics to EMP attack encourage other nations to develop or acquire nuclear weapons, and (3) how likely are terrorist organizations to launch a smaller-scale EMP attack against the United States?

Electronic Fuel Injection System

Added on: March 15th, 2012 by 1 Comment

In developed and developing countries, considerable emphasis is being laid on minimizing pollutants from internal combustion engines. A two-stroke cycle engine produces a considerable amount of pollutants when gasoline is used as a fuel, due to short-circuiting of the charge. These pollutants, which include unburnt hydrocarbons and carbon monoxide, are harmful to human beings. There is a strong need to develop new technology that can minimize pollution from these engines.

Direct fuel injection has been demonstrated to significantly reduce unburned hydrocarbon emissions by timing the injection of fuel in such way as to prevent the escape of unburned fuel from the exhaust port during the scavenging process.

The increased use of petroleum fuels by automobiles has not only caused fuel scarcities, price hikes, higher import bills, and economic imbalance but also causes health hazards due to its toxic emissions. Conventional fuels used in automobiles emit toxic pollutants, which cause asthma, chronic cough, skin degradation, breathlessness, eye and throat problems, and even cancer.

In recent years, environmental improvement (CO2, NOx and ozone reduction) and energy issues have become more and more important concerns worldwide. Natural gas is a good alternative fuel to address these problems because of its abundant availability and clean burning characteristics.

Gi-Fi

Added on: March 15th, 2012 by 6 Comments

Gi-Fi, or Gigabit Wireless, is the world’s first transceiver integrated on a single chip that operates at 60 GHz on a CMOS process. It will allow wireless transfer of audio and video data at up to 5 gigabits per second, ten times the current maximum wireless transfer rate, at one-tenth of the cost, usually within a range of 10 meters. It utilizes a 5 mm square chip and a 1 mm wide antenna burning less than 2 milliwatts of power to transmit data wirelessly over short distances, much like Bluetooth.
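
A quick back-of-the-envelope check of the headline figure (the file size is just an example, not from the abstract): at 5 gigabits per second, a DVD-sized file moves over a short Gi-Fi link in seconds.

    file_size_gb = 4.7                        # single-layer DVD image, gigabytes
    rate_gbps = 5.0                           # claimed Gi-Fi throughput, gigabits/s
    transfer_time = file_size_gb * 8 / rate_gbps
    print(f"≈ {transfer_time:.1f} seconds")   # about 7.5 s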

Gi-Fi will help push wireless communications toward higher speeds. For many years cables ruled the world, and optical fibers played a dominant role because of their higher bit rates and faster transmission. But the installation of cables caused great difficulty and thus led to wireless access. The foremost of these is Bluetooth, which can cover 9-10 meters; Wi-Fi followed, with a coverage area of about 91 meters. No doubt, the introduction of Wi-Fi wireless networks has proved a revolutionary solution to the “last mile” problem. However, the standard’s original limitations on data exchange rate and range, the number of changes, and the high cost of the infrastructure have not yet made it possible for Wi-Fi to become a total threat to cellular networks on the one hand and hard-wired networks on the other. But the continuous quest for ever better technology, despite the substantial advantages of present technologies, led to the introduction of a new, more up-to-date standard for data exchange rate, i.e., Gi-Fi.

The development will enable the truly wireless office and home of the future. As the integrated transceiver is extremely small, it can be embedded into devices. The breakthrough will mean the networking of office and home equipment without wires will finally become a reality.
In this paper we present a low-cost, low-power and high-bandwidth chip, which will be vital in enabling the digital economy of the future.

Tele-Graffiti

Added on: March 15th, 2012 by No Comments

There are several ways of building a remote sketching system. One way is to use a tablet and a stylus to input the sketch, and a computer monitor to display the sketch at the remote site. Such systems have a number of disadvantages. Writing with a stylus on a glass tablet is unnatural compared to sketching with a regular pen and paper. Shading and other effects are harder to achieve. Changing color means using the computer to select a new color. Incorporating existing hard-copy documents such as a graded exam is impossible.

Another way of building a remote sketching system is to use a video camera to image the sketch at one end, transmit the captured video to the other end, and display it there using an LCD projector. See Figure 1 for a schematic diagram of how such a system might operate. The first such camera-projector based remote sketching system was Pierre Wellner’s Xerox “Double DigitalDesk”.

Strain Gauge

Added on: March 15th, 2012 by No Comments

A strain gauge (also strain gage) is a device used to measure the strain of an object. Invented by Edward E. Simmons and Arthur C. Ruge in 1938, the most common type of strain gauge consists of an insulating flexible backing which supports a metallic foil pattern. The gauge is attached to the object by a suitable adhesive, such as cyanoacrylate. As the object is deformed, the foil is deformed, causing its electrical resistance to change. This resistance change, usually measured using a Wheatstone bridge, is related to the strain by the quantity known as the gauge factor.
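
A small numerical illustration of the gauge-factor relation mentioned above, with assumed but typical values; the quarter-bridge expression is the standard small-strain approximation.

    # GF = (dR/R) / strain, so dR = R * GF * strain.
    R_nominal = 350.0          # ohms, a common gauge resistance
    gauge_factor = 2.0         # typical for metallic foil gauges
    strain = 500e-6            # 500 microstrain

    delta_R = R_nominal * gauge_factor * strain
    print(f"resistance change ≈ {delta_R:.3f} ohm")     # ~0.35 ohm

    # Quarter Wheatstone bridge output for small strains: V_ex * GF * strain / 4
    V_ex = 5.0
    print(f"bridge output ≈ {1e3 * V_ex * gauge_factor * strain / 4:.2f} mV")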

ARM Processor

Added on: March 14th, 2012 by 1 Comment

An ARM processor is any of several 32-bit RISC (reduced instruction set computer) microprocessors developed by Advanced RISC Machines, Ltd. The ARM architecture was originally conceived by Acorn Computers Ltd. in the 1980s. Since then, it has evolved into a family of microprocessors extensively used in consumer electronic devices such as mobile phones, multimedia players, pocket calculators and PDAs (personal digital assistants).
ARM processor features include:

  • Load/store architecture
  • An orthogonal instruction set
  • Mostly single-cycle execution
  • A 16×32-bit register file
  • Enhanced power-saving design

ARM provides developers with intellectual property (IP) solutions in the form of processors, physical IP, cache and SoC designs, application-specific standard products (ASSPs), related software and development tools — everything you need to create an innovative product design based on industry-standard components that are ‘next generation’ compatible.

Zigbee Technology

Added on: March 14th, 2012 by No Comments

ZigBee is a communication standard that provides short-range, cost-effective networking capability. It has been developed with an emphasis on low-cost, battery-powered applications such as building automation and industrial and commercial control. ZigBee has been introduced by the IEEE and the ZigBee Alliance to provide a first general standard for these applications. The IEEE is the Institute of Electrical and Electronics Engineers, a non-profit organization dedicated to furthering technology involving electronics and electronic devices. The 802 group is the section of the IEEE involved in network operations and technologies, including mid-sized networks and local networks. Group 15 deals specifically with wireless networking technologies, and includes the now ubiquitous 802.15.1 working group, which is also known as Bluetooth.

The name “ZigBee” is derived from the erratic zigging patterns many bees make between flowers when collecting pollen. This is evocative of the invisible webs of connections existing in a fully wireless environment. The standard itself is regulated by a group known as the ZigBee Alliance, with over 150 members worldwide.
While Bluetooth focuses on connectivity between large packet user devices, such as laptops, phones, and major peripherals, ZigBee is designed to provide highly efficient connectivity between small packet devices. As a result of its simplified operations, which are one to two full orders of magnitude less complex than a comparable Bluetooth device, pricing for ZigBee devices is extremely competitive, with full nodes available for a fraction of the cost of a Bluetooth node.

ZigBee devices are limited to a throughput of 250 kbps, compared to Bluetooth’s much larger pipeline of 1 Mbps, and operate on the 2.4 GHz ISM band, which is available throughout most of the world.
ZigBee has been developed to meet the growing demand for capable wireless networking between numerous low-power devices. In industry ZigBee is being used for next generation automated manufacturing, with small transmitters in every device on the floor, allowing for communication between devices to a central computer. This new level of communication permits finely-tuned remote monitoring and manipulation. In the consumer market ZigBee is being explored for everything from linking low-power household devices such as smoke alarms to a central housing control unit, to centralized light controls.

The specified maximum range of operation for ZigBee devices is 250 feet (76 m), substantially further than that of Bluetooth-capable devices, although the security concerns raised over remotely “sniping” Bluetooth devices may prove to hold true for ZigBee devices as well.

Due to its low power output, ZigBee devices can sustain themselves on a small battery for many months, or even years, making them ideal for install-and-forget purposes, such as most small household systems. Predictions of ZigBee installation for the future, most based on the explosive use of ZigBee in automated household tasks in China, look to a near future when upwards of sixty ZigBee devices may be found in an average American home, all communicating with one another freely and regulating common tasks seamlessly.

Plastic Welding

Added on: March 14th, 2012 by 2 Comments

Plastic welding and spot welding are almost similar to each other, but there is a notable difference: in plastic welding, heat is supplied by convection from a jet of hot gas rather than by conduction through the pincher tips. The two plastic pieces are brought together and, at the time of welding, a jet of hot air is directed onto the joint. This melts the parts to be joined along with a plastic filler rod. As the rod starts melting, it is forced into the joint and causes fusion of the parts.

Plastic identification is the first point to be noted in order to choose a suitable plastic welding rod. A plastic welding rod, or thermoplastic welding rod, has a constant cross-section, and using it two plastic pieces can be joined. It may have a circular or triangular cross-section. Porosity of the plastic welding rod is an important factor: high porosity creates air bubbles in the rod, which decreases the quality of the weld. The rods used must therefore maintain zero porosity; in other words, they should be void-free. Products like chemical tanks, water tanks, heat exchangers and plumbing fittings are manufactured using the technique of plastic welding, and by adopting this technique money can be saved.

Using plastic welding, two plastics can be welded together. This type of weld is performed on children’s toys, lawn furniture, auto parts and other plastic equipment used daily, for both domestic and commercial purposes. This type of welding is employed to join thermoplastics when they are heated and placed under a particular pressure. In normal practice, the pieces are joined together using a filler material, although on certain occasions the filler can be avoided. Generally, plastic is not durable and has a shorter life span: natural factors like cold weather, ultraviolet radiation from the sun, or continuous exposure to contaminating chemicals will damage plastic products, and plastic can also be damaged if it is hit against a hard surface. But, as the price of new parts is high, it is often preferred to repair the existing products.

As there are different types of plastics, we must know which one we are working with in order to choose the correct welding material. We must also know the difference between thermoplastics and thermosets, because it is not possible to weld thermosets. If you use the wrong welding rod for the plastic to be repaired, bonding will not take place. Because materials like polyolefins have a lower surface energy, a special group of polyolefin adhesives has to be used. When repairing plastic, there are usually two types of defects: a crack or a broken part. In the case of a crack, there is a particular stress affecting the inside of the material; you have to repair the crack so that it does not continue through the piece.

Wibree

Added on: March 14th, 2012 by No Comments

Now that wireless connections are established solutions in various sectors of consumer electronics, the question arises whether devices that draw long life from a small battery could find benefit as well in a global standard for wireless low energy technology. Makers of sensors for sports, health and fitness devices have dabbled in wireless but not together, while manufacturers of products like watches have never even considered adding wireless functionality because no options were available. Several wireless technologies have tried to address the needs of the button cell battery market, but most were proprietary and garnered little industry support. However, none of these technologies let smaller manufacturers plug in to a global standard that provides a viable link with devices like mobile phones and laptops.

However, companies that want to make their small devices wireless need to build and sell either a dedicated display unit or an adapter that connects to a computing platform such as a mobile phone, PC or iPod. There have been few successful products that followed this route to a mass market. A new flavor of Bluetooth technology may be just the answer, and a more efficient alternative to yet another wireless standard.

In the ten years since engineers from a handful of companies came together to create the first Bluetooth specification, Bluetooth technology has become a household term, a globally recognized standard for connecting portable devices. The Bluetooth brand ranks among the top ingredient technology brands worldwide, recognized by a majority of consumers around the world. A thriving global industry of close to 11,000 member companies now designs Bluetooth products and works together to develop future generations of the technology, found in well over 50 percent of mobile phones worldwide and with more than two billion devices shipped to date. Bluetooth wireless technology has established the standard for usability, ease of setup and compatibility across all manufacturers. A well-established set of Bluetooth profiles define the communication needs for a wide range of applications, making it easy for a manufacturer to add Bluetooth wireless connectivity to new devices — from phones to headsets to printers — with a minimum of programming and testing work.

Pervasive Computing

Added on: March 14th, 2012 by No Comments

Pervasive computing refers to embedding computers and communication in our environment. This provides an attractive vision for the future of computing. The idea behind pervasive computing is to make computing power disappear into the environment while remaining available whenever needed; in other words, it means availability and invisibility. These invisible computers won’t have keyboards or screens, but will watch us, listen to us and interact with us. Pervasive computing makes the computer operate in the messy and unstructured world of real people and real objects. Distributed devices in this environment must have the ability to dynamically discover and integrate other devices. The prime goal of this technology is to make human life simpler, safer and more efficient by using the ambient intelligence of computers.

Holographic Memory

Added on: March 14th, 2012 by No Comments

Devices that use light to store and read data have been the backbone of data storage for nearly two decades. Compact discs revolutionized data storage in the early 1980s, allowing multi-megabytes of data to be stored on a disc that has a diameter of a mere 12 centimeters and a thickness of about 1.2 millimeters. In 1997, an improved version of the CD, called a digital versatile disc (DVD), was released, which enabled the storage of full-length movies on a single disc.

CDs and DVDs are the primary data storage methods for music, software, personal computing and video. A CD can hold 783 megabytes of data. A double-sided, double-layer DVD can hold 15.9 GB of data, which is about eight hours of movies. These conventional storage mediums meet today’s storage needs, but storage technologies have to evolve to keep pace with increasing consumer demand. CDs, DVDs and magnetic storage all store bits of information on the surface of a recording medium. In order to increase storage capabilities, scientists are now working on a new optical storage method called holographic memory that will go beneath the surface and use the volume of the recording medium for storage, instead of only the surface area. Three-dimensional data storage will be able to store more information in a smaller space and offer faster data transfer times.

Holographic memory is a developing technology that promises to revolutionize storage systems. It can store up to 1 TB of data in a crystal the size of a sugar cube; data from more than 1,000 CDs could fit into a holographic memory system. Most computer hard drives available today hold only 10 to 40 GB of data, a small fraction of what a holographic memory system can hold. Conventional memories use only the surface to store data, whereas holographic data storage systems use the volume, which gives them many advantages over conventional storage systems. The technique is based on the principle of holography.

Scientist Pieter J. van Heerden first proposed the idea of holographic (three-dimensional) storage in the early 1960s. A decade later, scientists at RCA Laboratories demonstrated the technology by recording 500 holograms in an iron-doped lithium-niobate crystal and 550 holograms of high-resolution images in a light-sensitive polymer material. The lack of cheap parts and the advancement of magnetic and semiconductor memories placed the development of holographic data storage on hold.

IRIS Recognition

Added on: March 12th, 2012 by 1 Comment

Iris recognition is an automated method of capturing a person’s unique biological data that distinguishes him or her from another individual. It has emerged as one of the most powerful and accurate identification techniques in the modern world, and has proven to be among the most foolproof techniques for identifying individuals without the use of cards, PINs or passwords. It facilitates automatic identification whereby electronic transactions or access to places, information or accounts are made easier, quicker and more secure.

A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person’s face is the detailed texture of each eye’s iris: an estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees of freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person’s iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most significant bits comprise a 512-byte “iris code”. Statistical decision theory generates identification decisions from Exclusive-OR comparisons of complete iris codes at the rate of 4,000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical “cross-over” error rate of one in 131,000 when a decision criterion is adopted that equalizes the False Accept and False Reject error rates.
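
The Exclusive-OR comparison described above can be sketched as follows; the code length and the decision threshold are illustrative values only, not taken from the abstract.

    # Normalized Hamming distance between two equal-length iris codes.
    def hamming_distance(code_a: bytes, code_b: bytes) -> float:
        assert len(code_a) == len(code_b)
        differing = sum(bin(a ^ b).count("1") for a, b in zip(code_a, code_b))
        return differing / (8 * len(code_a))

    def same_eye(code_a: bytes, code_b: bytes, threshold: float = 0.32) -> bool:
        # Codes from the same eye fail the test of statistical independence,
        # i.e. their Hamming distance falls well below the threshold.
        return hamming_distance(code_a, code_b) < threshold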

Touch Screen

Added on: March 12th, 2012 by No Comments

A touch screen is a computer display screen that is sensitive to human touch, allowing a user to interact with the computer by touching pictures or words on the screen. Touch screens are used with information kiosks (interactive computer terminals available for public use, such as those offering internet access or site-specific information), computer-based training devices, and systems designed to help individuals who have difficulty manipulating a mouse or keyboard. The technology can also be used as an alternative user interface for applications that normally require a mouse, such as a web browser. Some applications are designed specifically for touch screen technology, often having larger icons and links than typical PC applications. Monitors are also available with a built-in touch screen kit.

A touch screen kit includes a touch screen panel, a controller, and a software driver. The panel is a clear panel attached externally to the monitor that plugs into a serial or Universal Serial Bus (USB) port, or into a bus card installed inside the computer. The touch screen panel registers touch events and passes these signals to the controller. The controller then processes the signals and sends the data to the processor. The software driver translates touch events into mouse events; drivers are available for both Windows and Macintosh operating systems. Internal touch screen kits are available but require professional installation, because they must be installed inside the monitor.
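
The driver’s job of turning raw panel readings into mouse events can be pictured with the following sketch; the function names, coordinate range and screen resolution are assumptions for illustration, not a real driver API.

    # Map raw controller coordinates (e.g. 12-bit ADC values) to screen pixels.
    def panel_to_screen(raw_x: int, raw_y: int, panel_max: int = 4095,
                        screen_w: int = 1920, screen_h: int = 1080) -> tuple[int, int]:
        return (raw_x * screen_w // panel_max, raw_y * screen_h // panel_max)

    def handle_touch_event(raw_x: int, raw_y: int, pressed: bool) -> None:
        x, y = panel_to_screen(raw_x, raw_y)
        # A real driver would inject these as operating-system mouse events;
        # here the translated event is simply printed.
        print(f"mouse {'down' if pressed else 'up'} at ({x}, {y})")

    handle_touch_event(2048, 1024, pressed=True)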

Virtual Retinal Display

Added on: March 12th, 2012 by No Comments

The Virtual Retinal Display (VRD) is a personal display device under development at the University of Washington’s Human Interface Technology Laboratory in Seattle, Washington USA. The VRD scans light directly onto the viewer’s retina. The viewer perceives a wide field of view image. Because the VRD scans light directly on the retina, the VRD is not a screen based technology.

The VRD was invented at the University of Washington in the Human Interface Technology Lab (HIT) in 1991. The development began in November 1993. The aim was to produce a full color, wide field-of-view, high resolution, high brightness, low cost virtual display. Microvision Inc. has the exclusive license to commercialize the VRD technology. This technology has many potential applications, from head-mounted displays (HMDs) for military/aerospace applications to medical society.

The VRD projects a modulated beam of light (from an electronic source) directly onto the retina of the eye, producing a rasterized image. The viewer has the illusion of seeing the source image as if he or she were standing two feet away in front of a 14-inch monitor. In reality, the image is on the retina of the viewer’s eye and not on a screen. The perceived image is excellent, with stereo view, full color, wide field of view and no flicker.

Interplanetary Internet

Added on: March 12th, 2012 by No Comments

Increasingly, network applications must communicate with counterparts across disparate networking environments characterized by significantly different sets of physical and operational constraints; wide variations in transmission latency are particularly troublesome. The proposed Interplanetary Internet (IPN), which must encompass both terrestrial and interplanetary links, is an extreme case. An architecture based on a protocol that can operate successfully and reliably in multiple disparate environments would simplify the development and deployment of such applications. The Internet protocols are ill suited for this purpose. They are, in general, poorly suited to operation on paths in which some of the links operate intermittently or over extremely long propagation delays. The principal problem is reliable transport, but the operation of the Internet’s routing protocols would also raise troubling issues.

It is this analysis that leads us to propose an architecture based on Internet-independent middleware: use exactly those protocols at all layers that are best suited to operation within each environment, but insert a new overlay network protocol between the applications and the locally optimized stacks. This new protocol layer, called the bundle layer, ties together the region-specific lower layers so that application programs can communicate across multiple regions.
The DTN architecture implements store-and-forward message switching.

A DTN is a network of regional networks, where a regional network is a network that is adapted to a particular communication region, wherein communication characteristics are relatively homogeneous. Thus, DTNs support interoperability of regional networks by accommodating long delays between and within regional networks, and by translating between regional communication characteristics.
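
The store-and-forward behaviour of the bundle layer can be pictured with the following conceptual sketch; the class and method names are illustrative, not from any DTN implementation.

    from collections import deque

    class BundleNode:
        def __init__(self, name: str):
            self.name = name
            self.store = deque()          # bundles held in persistent storage

        def receive(self, bundle: dict) -> None:
            # Take custody of the bundle even if no onward link exists yet.
            self.store.append(bundle)

        def contact(self, next_hop: "BundleNode") -> None:
            # When an intermittent link to the next region comes up,
            # forward everything that has been waiting.
            while self.store:
                next_hop.receive(self.store.popleft())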

Self Managing Computing

Added on: March 11th, 2012 by 1 Comment

Self managing computing helps address complexity issues by using technology to manage technology. The idea is not new; many of the major players in the industry have developed and delivered products based on this concept. Self managing computing is also known as autonomic computing.

Autonomic Computing is an initiative started by IBM in 2001. Its ultimate aim is to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth. In other words, autonomic computing refers to the self-managing characteristics of distributed computing resources, adapting to unpredictable changes while hiding intrinsic complexity to operators and users.

The term autonomic is derived from human biology. The autonomic nervous system monitors your heartbeat, checks your blood sugar level and keeps your body temperature close to 98.6°F, without any conscious effort on your part. In much the same way, self managing computing components anticipate computer system needs and resolve problems with minimal human intervention.

Self managing computing systems have the ability to manage themselves and dynamically adapt to change in accordance with business policies and objectives. Self-managing systems can perform management activities based on situations they observe or sense in the IT environment. Rather than IT professionals initiating management activities, the system observes something about itself and acts accordingly. This allows the IT professional to focus on high-value tasks while the technology manages the more mundane operations. Self managing computing can result in a significant improvement in system management efficiency, when the disparate technologies that manage the environment work together to deliver performance results system wide.

However, complete autonomic systems do not yet exist. This is not a proprietary solution. It’s a radical change in the way businesses, academia, and even the government design, develop, manage and maintain computer systems. Self managing computing calls for a whole new area of study and a whole new way of conducting business.

Self managing computing is the self-management of e-business infrastructure, balancing what is managed by the IT professional and what is managed by the system. It is the evolution of e-business.

Virtual Keyboard

Added on: March 11th, 2012 by No Comments

The virtual keyboard is just another example of today’s computing trend of ‘smaller and faster’. Computing is now not limited to desktops and laptops; it has found its way into mobile devices like palmtops and even cell phones. But what has not changed for the last 50 or so years is the input device, the good old QWERTY keyboard. The virtual keyboard technology is the latest development.

The virtual keyboard technology uses sensor technology and artificial intelligence to let users work on any flat surface as if it were a keyboard. Virtual keyboards let you easily create multilingual text content on almost any existing platform and output it directly to PDAs or even web pages. Being a small, handy, well-designed and easy-to-use application, the virtual keyboard turns into a perfect solution for cross-platform text input.
The main features are: platform-independent multilingual support for keyboard text input, built-in language layouts and settings, support for copy/paste and similar operations just as in a regular text editor, no change to existing system language settings, an easy and user-friendly interface and design, and a small file size.

The report first gives an overview of the QWERTY keyboards and the difficulties arising from using them. It then gives a description about the virtual keyboard technology and the various types of virtual keyboards in use. Finally the advantages, drawbacks and the applications are discussed.

Biocomputers

Added on: March 11th, 2012 by 2 Comments

Biocomputing is one of the upcoming fields in the areas of molecular electronics and nanotechnology. The idea behind blending biology with technology stems from the limitations faced by semiconductor designers in decreasing the size of silicon chips, which directly affects processor speed. Biocomputers consist of biochips, unlike normal computers, which are silicon-based. These biochips consist of biomaterials such as nucleic acids and enzymes.

The power of a biocomputer is that it acts as a massively parallel computer and has immense data storage capability. Thus, it can be used to solve NP-complete problems with higher efficiency. The possibilities for biocomputers include developing a credit-card-sized computer that could design a super-efficient global air-traffic control system. The basic idea behind biocomputing is to use molecular reactions for computational purposes.

Viruses and Worms

Added on: March 11th, 2012 by No Comments

One of the most high-profile threats to information integrity is the computer virus. In this paper, I present what viruses, worms, and Trojan horses are and their differences, different strategies of virus spreading, and case studies of the Slammer and Blaster worms.

The internet consists of hundreds of millions of computers distributed around the world. Millions of people use the internet daily, taking full advantage of the available services at both personal and professional levels. The internet connectivity among computers on which the World Wide Web relies, however, renders its nodes an easy target for malicious users who attempt to exhaust their resources, damage the data, or create havoc in the network.
Computer viruses, especially in recent years, have increased dramatically in number. One of the most high-profile threats to information integrity is the computer virus.

Surprisingly, PC viruses have been around for two-thirds of the IBM PC’s lifetime, first appearing in 1986. With global computing on the rise, computer viruses have had more visibility in the past few years. In fact, the entertainment industry has helped by illustrating the effects of viruses in movies such as ”Independence Day”, ”The Net”, and ”Sneakers”. Along with computer viruses, computer worms are also increasing day by day. So, there is a need to immunise the internet by creating awareness in people about these threats in detail. In this paper I explain the basic concepts of viruses and worms and how they spread.

The basic organisation of the paper is as follows. In section 2, we give some preliminaries: the definitions of computer viruses, worms, and Trojan horses, as well as some other malicious programs, and the basic characteristics of a virus.

In section 3, we give a detailed description: the malicious code environments where a virus can propagate, an overview of virus and worm types, and the categories of worms, where the different forms of worm are explained in a broad sense. Section 4 covers file infection techniques, describing the various infection mechanisms of a virus. Section 5 presents the steps in worm propagation, describing the basic steps a typical worm follows to propagate.

In section 6, two case studies, of the Slammer worm and the Blaster worm, are discussed.

Stream Control Transmission Protocol

Added on: March 11th, 2012 by No Comments

The Stream Control Transmission Protocol (SCTP) is a new IP transport protocol, existing at an equivalent level to UDP (User Datagram Protocol) and TCP (Transmission Control Protocol), which currently provide transport layer functions to all of the main Internet applications. UDP, RTP, TCP, and SCTP are currently the IETF standards-track transport-layer protocols. Each protocol has a domain of applicability and services it provides, albeit with some overlaps.

Like TCP, SCTP provides a reliable transport service, ensuring that data is transported across the network without error and in sequence. Like TCP, SCTP is a connection-oriented mechanism, meaning that a relationship is created between the endpoints of an SCTP session prior to data being transmitted, and this relationship is maintained until all data transmission has been successfully completed.

Unlike TCP, SCTP provides a number of functions that are considered critical for signaling transport, and which at the same time can provide transport benefits to other applications requiring additional performance and reliability.

By clarifying the situations where the functionality of these protocols is applicable, this document can guide implementers and protocol designers in selecting which protocol to use.

Special attention is given to services SCTP provides which would make a decision to use SCTP the right one.
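
As a rough illustration only, a one-to-one SCTP association can be opened from Python on platforms whose kernel supports SCTP (socket.IPPROTO_SCTP is only defined there); the peer address and port below are placeholders.

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
    sock.connect(("198.51.100.10", 9999))   # placeholder peer address and port
    sock.sendall(b"signalling message")     # delivered reliably and in sequence
    sock.close()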

Design, Analysis, Fabrication and Testing of Composite Leaf Spring

Added on: March 11th, 2012 by No Comments

This work gives a brief look at the suitability of composite leaf springs for vehicles and their advantages. Efforts have been made to reduce the cost of the composite leaf spring to that of the steel leaf spring. The achievement of weight reduction with adequate improvement of mechanical properties has made composites a very good replacement for conventional steel. The material and manufacturing process are selected based on cost and strength factors, and the design method is selected on the basis of mass production.
From the comparative study, it is seen that composite leaf springs perform better and are more economical than conventional leaf springs.

Polymer Fiber Reinforced Concrete Pavements

Added on: March 11th, 2012 by 1 Comment

Road transportation is undoubtedly the lifeline of the nation and its development is a crucial concern. The traditional bituminous pavements and their need for continuous maintenance and rehabilitation operations point towards the scope for cement concrete pavements. There are several advantages of cement concrete pavements over bituminous pavements. This paper explains POLYMER FIBRE REINFORCED CONCRETE (PFRC) PAVEMENTS, a recent advancement in the field of reinforced concrete pavement design. PFRC pavements prove to be more efficient than conventional RC pavements in several aspects, which are explained in this paper. The design procedure and paving operations of PFRC are also discussed in detail. A detailed case study of polyester fiber waste as fiber reinforcement is included and the results of the study are interpreted. The paper also includes a brief comparison of PFRC pavements with conventional concrete pavements. The merits and demerits of PFRC pavements are also discussed, and the applications of PFRC in various construction projects in Kerala are described in brief.

ITER

Added on: March 11th, 2012 by No Comments

ITER (originally an acronym of International Thermonuclear Experimental Reactor) is an international nuclear fusion research and engineering project, which is currently building the world’s largest and most advanced experimental tokamak nuclear fusion reactor at Cadarache in the south of France. The ITER project aims to make the long-awaited transition from experimental studies of plasma physics to full-scale electricity-producing fusion power plants. The project is funded and run by seven member entities – the European Union (EU), India, Japan, the People’s Republic of China, Russia, South Korea and the United States. The EU, as host party for the ITER complex, is contributing 45% of the cost, with the other six parties contributing 9% each.

The ITER fusion reactor itself has been designed to produce 500 megawatts of output power for 50 megawatts of input power, or ten times the amount of energy put in. The machine is expected to demonstrate the principle of getting more energy out of the fusion process than is used to initiate it, something that has not been achieved with previous fusion reactors. Construction of the facility began in 2007, and the first plasma is expected in 2019. When ITER becomes operational, it will become the largest magnetic confinement plasma physics experiment in use, surpassing the Joint European Torus. The first commercial demonstration fusion power plant, named DEMO, is proposed to follow on from the ITER project to bring fusion energy to the commercial market.
The same fusion principle that powers the sun is used to produce heat in the ITER plant.

Sixth Sense Technology

Added on: March 11th, 2012 by No Comments

Although miniaturized versions of computers help us to connect to the digital world even while we are travelling, there is as yet no device that gives a direct link between the digital world and our physical interaction with the real world. Usually information is stored traditionally on paper or on a digital storage device. Sixth sense technology helps to bridge this gap between the tangible and intangible worlds. The Sixth Sense device is basically a wearable gestural interface that connects the physical world around us with digital information and lets us use natural hand gestures to interact with this information. The sixth sense technology was developed by Pranav Mistry, a PhD student in the Fluid Interfaces Group at the MIT Media Lab. The sixth sense technology has a Web 4.0 view of human and machine interactions. Sixth Sense integrates digital information into the physical world and its objects, making the entire world your computer. It can turn any surface into a touch screen for computing, controlled by simple hand gestures. It is not a technology aimed at changing human habits, but at making computers and other machines adapt to human needs. It also supports multi-user and multi-touch interaction. The Sixth Sense device is a mini-projector coupled with a camera and a cell phone, which acts as the computer and the connection to the cloud, all the information stored on the web. The current prototype costs around $350. The Sixth Sense prototype has been used to implement several applications that demonstrate the usefulness, viability and flexibility of the system.

Biodiesel

Added on: March 11th, 2012 by 1 Comment

There is growing interest in biodiesel (fatty acid methyl ester, or FAME) because its properties are similar to those of diesel fuels. Diesel engines operated on biodiesel have lower emissions of carbon monoxide, unburned hydrocarbons, particulate matter, and air toxics than when operated on petroleum-based diesel fuel. Production of FAME from rapeseed (a non-edible oil) fatty acid distillate having a high free fatty acid (FFA) content was investigated in this work. Conditions for the transesterification of rapeseed oil were 1.8% H2SO4 as catalyst, a MeOH/oil molar ratio of 2:0.1 and a reaction temperature of 65 °C for a period of 3 h. The yield of methyl ester was greater than 90% within 1 h.
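
Transesterification converts each mole of triglyceride plus three moles of methanol into three moles of methyl ester and glycerol, and methanol is normally dosed in excess of that 3:1 stoichiometry. The sketch below only illustrates that dosing arithmetic; the average triglyceride molar mass and the excess ratio are assumed example values, not figures from the study described above.

    # Rough methanol-dosing arithmetic for transesterification (illustrative only).
    M_OIL = 880.0       # assumed average molar mass of rapeseed triglyceride, g/mol
    M_MEOH = 32.04      # molar mass of methanol, g/mol
    OIL_MASS_KG = 1.0   # basis: 1 kg of oil
    MOLAR_RATIO = 6.0   # assumed MeOH:oil molar ratio (stoichiometric minimum is 3:1)

    oil_mol = OIL_MASS_KG * 1000.0 / M_OIL            # moles of triglyceride
    meoh_mass_g = oil_mol * MOLAR_RATIO * M_MEOH      # methanol required, grams
    print(f"Methanol needed per kg oil: {meoh_mass_g:.0f} g")   # ~218 g at 6:1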

Biodiesel is becoming widely available in most parts of the U.S. and can be substituted for petroleum-based diesel fuel (“petro diesel”) in virtually any standard unmodified diesel engine. Biodiesel offers many advantages over petroleum-based diesel:

  • It is made from domestically produced and renewable agricultural products, mainly vegetable oil or animal fat.
  • It is essentially non-toxic and biodegradable.
  • It has a high flash point (over 300 °F) and is difficult to ignite with a match.
  • It reduces emissions of many toxic air pollutants.
  • It functions as an excellent fuel lubricant and performs similarly to low-sulfur diesel with regard to power, torque, and fuel consumption.
  • It can greatly reduce carbon emissions.

CRDI Engines

Added on: March 11th, 2012 by No Comments

Compared with petrol, diesel is the lower-quality product of the petroleum family. Diesel molecules are larger and heavier than those of petrol and are therefore more difficult to atomize. Imperfect atomization leads to more unburnt fuel, hence more pollutants, lower fuel efficiency and less power.

Common-rail technology is intended to improve the atomization process. Conventional direct-injection diesel engines must repeatedly generate fuel pressure for each injection, but in CRDI engines the pressure is built up independently of the injection sequence and remains permanently available in the fuel line. Some CRDI systems also use an ion sensor to provide real-time combustion data for each cylinder. The common rail upstream of the cylinders acts as an accumulator, distributing the fuel to the injectors at a constant pressure of up to 1600 bar. High-speed solenoid valves, regulated by the electronic engine management, separately control the injection timing and the amount of fuel injected for each cylinder as a function of the cylinder's actual need.

In other words, pressure generation and fuel injection are independent of each other. This is an important advantage of common-rail injection over conventional fuel injection systems, as CRDI increases the controllability of the individual injection processes and further refines fuel atomization, saving fuel and reducing emissions. A fuel-economy improvement of 25 to 35% is obtained over a standard diesel engine, and a substantial noise reduction is achieved due to more precisely synchronized timing. The principle of CRDi is also applied in petrol engines in the form of GDI (Gasoline Direct Injection), which removes to a great extent the drawbacks of conventional carburetors and MPFI systems.
CRDi stands for Common Rail Direct Injection.
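
As a very rough illustration of why a constant, high rail pressure lets injection quantity be controlled purely through valve opening time, the sketch below estimates injector opening duration from the standard orifice-flow relation (mass flow proportional to the square root of the pressure drop). All numbers here (nozzle area, discharge coefficient, fuel density, target fuel mass) are assumed example values, not data for any particular engine.

    import math

    # Illustrative injector-timing estimate using the orifice flow equation:
    #   mass_flow = Cd * A * sqrt(2 * rho * delta_p)
    CD = 0.7                      # assumed discharge coefficient of the nozzle holes
    AREA_M2 = 1.2e-7              # assumed total nozzle hole area, m^2
    RHO = 830.0                   # assumed diesel density, kg/m^3
    RAIL_PRESSURE_PA = 1600e5     # 1600 bar rail pressure
    CYL_PRESSURE_PA = 50e5        # assumed in-cylinder pressure at injection
    FUEL_PER_STROKE_KG = 30e-6    # assumed target: 30 mg of fuel per injection

    delta_p = RAIL_PRESSURE_PA - CYL_PRESSURE_PA
    mass_flow = CD * AREA_M2 * math.sqrt(2.0 * RHO * delta_p)   # kg/s through nozzle
    duration_ms = FUEL_PER_STROKE_KG / mass_flow * 1000.0

    print(f"Injector must stay open ~{duration_ms:.2f} ms to deliver 30 mg")  # ~0.7 ms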

Agent Based Computing

Added on: March 11th, 2012 by No Comments

Agent-based computing represents an exciting new synthesis both for Artificial Intelligence and, more generally, for Computer Science. It has the potential to improve the theory and practice of modeling, designing and implementing complex computer systems. Yet, to date, there has been little systematic analysis of what makes the agent-based approach such an appealing and powerful computational model. This paper aims to rectify that situation. The standpoint of the analysis is the role of agent-based software in solving complex, real-world problems. In particular, it will be argued that the development of robust and scalable software systems requires autonomous agents that can complete their objectives while situated in a dynamic and uncertain environment, that can engage in rich, high-level interactions, and that can operate within flexible organizational structures.

Keywords: autonomous agents, agent-oriented software engineering, complex systems
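
As a deliberately tiny illustration of the perceive-decide-act loop that autonomous agents embody, the Python sketch below has an agent pursue an objective in an environment whose observations and action outcomes are noisy. The class names and the random environment are our own illustrative choices, not part of the paper.

    import random

    # Minimal autonomous agent: a perceive-decide-act loop pursuing an objective
    # (reach a target value) in an environment whose responses are uncertain.
    class Environment:
        def __init__(self):
            self.state = 0

        def perceive(self):
            return self.state + random.randint(-1, 1)   # noisy observation

        def apply(self, action):
            self.state += action + random.choice([-1, 0, 0, 1])  # uncertain effect

    class Agent:
        def __init__(self, goal):
            self.goal = goal

        def decide(self, observation):
            if observation < self.goal:
                return 1      # move up towards the goal
            if observation > self.goal:
                return -1     # move down towards the goal
            return 0          # hold position

    env, agent = Environment(), Agent(goal=10)
    for step in range(100):
        obs = env.perceive()
        if obs == agent.goal:
            print(f"objective reached after {step} steps")
            break
        env.apply(agent.decide(obs))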

3D-Doctor

Added on: March 10th, 2012 by 1 Comment

3D-DOCTOR is a software package used to extract information from image files and create 3D models. It was developed using object-oriented technology and provides efficient tools to process and analyze 3D images, object boundaries, 3D models and other associated data items in an easy-to-use environment. It performs 3D image segmentation, 3D surface modeling, rendering, volume rendering, 3D image processing, deconvolution, registration, automatic alignment, measurement, and many other functions. The software supports both grayscale and color images stored in DICOM, TIFF, Interfile, GIF, JPEG, PNG, BMP, PGM, RAW or other image file formats. 3D-DOCTOR creates 3D surface models and volume renderings from 2D cross-section images in real time on a PC. Leading hospitals, medical schools and research organizations around the world currently use 3D-DOCTOR.
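
3D-DOCTOR's own readers and tools are proprietary, so the sketch below is only a generic NumPy illustration of the core idea of building a 3D segmentation from a stack of 2D cross-sections: slices are stacked into a volume, thresholded into an object mask, and the object's volume is estimated from the voxel count. The threshold, voxel size and synthetic data are assumed example values.

    import numpy as np

    # Stack 2D cross-sections into a 3D volume (here: synthetic slices of a sphere).
    def synthetic_slice(z, size=64, radius=20):
        y, x = np.mgrid[0:size, 0:size]
        r2 = (x - size // 2) ** 2 + (y - size // 2) ** 2 + (z - size // 2) ** 2
        return np.where(r2 <= radius ** 2, 200, 10).astype(np.uint8)

    volume = np.stack([synthetic_slice(z) for z in range(64)])   # shape (64, 64, 64)

    # Threshold segmentation: voxels brighter than the cutoff belong to the object.
    THRESHOLD = 100
    mask = volume > THRESHOLD

    # Estimate the object volume from the voxel count and an assumed voxel size.
    VOXEL_MM3 = 0.5 * 0.5 * 0.5          # assumed 0.5 mm isotropic voxels
    print(f"segmented voxels: {mask.sum()}, approx volume: {mask.sum() * VOXEL_MM3:.1f} mm^3")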

Biofuels as Blending Components

Added on: March 10th, 2012 by No Comments

The single largest source of energy in India after coal is petroleum, about two-thirds of which is imported. The petroleum-derived fuels, i.e. motor gasoline and diesel, are the two major fuels extensively used today.

The high dependence on imported sources of energy is an issue related to the energy security of the country, and the combustion of fossil fuels has been recognized as a major cause of air pollution in Indian cities. Although CNG and LPG are being promoted as cleaner alternatives, both are in short supply and we have to depend on imports to meet the requirements. In the new petroleum policy passed on 6th October this year, though CNG and LPG are promoted, petrol and diesel continue to remain the two major fuels to be used.

We therefore need to look for cleaner alternatives which could decrease not only pollution but also our dependence on other countries. Among the various alternatives, biofuels such as ethanol and biodiesel, which can be produced from a host of biosources, can easily be used as blending components of both petrol and diesel in existing engines without modification. Unlike CNG and LPG, they do not require new infrastructure for the supply and distribution of fuel. Further, the production of these fuels will help utilize surplus agricultural produce and aid rural development. Ethanol is used mainly in petrol engines, while biodiesel finds application in diesel engines. Both add oxygen to their respective fuels, which in turn improves combustion efficiency and reduces harmful exhaust emissions.
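
The oxygen-addition point above is easy to quantify for an ethanol-petrol blend: ethanol (C2H5OH) is roughly 35% oxygen by mass, so a blend's oxygen content follows from the ethanol mass fraction. The sketch below works this out for an assumed E10 blend; the fuel densities used are typical textbook values, not figures from this article.

    # Approximate oxygen content of an ethanol-petrol blend (illustrative only).
    ETHANOL_VOL_FRACTION = 0.10     # assumed E10 blend: 10% ethanol by volume
    RHO_ETHANOL = 0.789             # g/mL, typical value
    RHO_PETROL = 0.74               # g/mL, assumed typical value
    O_FRACTION_IN_ETHANOL = 16.00 / 46.07   # oxygen mass fraction of C2H5OH ~ 0.347

    ethanol_mass = ETHANOL_VOL_FRACTION * RHO_ETHANOL
    petrol_mass = (1.0 - ETHANOL_VOL_FRACTION) * RHO_PETROL
    ethanol_mass_fraction = ethanol_mass / (ethanol_mass + petrol_mass)

    oxygen_wt_percent = ethanol_mass_fraction * O_FRACTION_IN_ETHANOL * 100.0
    print(f"E10 contains roughly {oxygen_wt_percent:.1f} wt% oxygen")   # ~3.7 wt%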
