

Engineering Topics Category

Ad hoc Networks

Added on: September 25th, 2013 by Afsal Meerankutty No Comments

Recent advances in portable computing and wireless technologies are opening up exciting possibilities for the future of wireless mobile networking. A mobile ad hoc network (MANET) is an autonomous system of mobile hosts connected by wireless links. Mobile networks can be classified into infrastructure networks and mobile ad hoc networks according to their dependence on fixed infrastructures. In an infrastructure mobile network, mobile nodes have wired access points (or base stations) within their transmission range.
The access points compose the backbone for an infrastructure network. In contrast, mobile ad hoc networks are autonomously self-organized networks without infrastructure support. In a mobile ad hoc network, nodes move arbitrarily; therefore, the network may experience rapid and unpredictable topology changes. Additionally, because nodes in a mobile ad hoc network normally have limited transmission ranges, some nodes cannot communicate directly with each other. Hence, routing paths in mobile ad hoc networks potentially contain multiple hops, and every node in a mobile ad hoc network has the responsibility to act as a router.

Mobile ad hoc networks originated from the DARPA Packet Radio Network (PRNet) and SURAN projects. Being independent of pre-established infrastructure, mobile ad hoc networks have advantages such as rapid and easy deployment, improved flexibility and reduced costs.
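The multi-hop routing idea above can be sketched in a few lines: links exist only between nodes within radio range, and a minimum-hop route is found by breadth-first search. This is an illustrative toy (the node positions, range and BFS route discovery are assumptions for the example, not a real MANET protocol such as AODV or DSR):

```python
import math
from collections import deque

def neighbours(nodes, radius):
    """Adjacency list: a link exists only when two nodes are within radio range."""
    adj = {n: [] for n in nodes}
    for a, (ax, ay) in nodes.items():
        for b, (bx, by) in nodes.items():
            if a != b and math.hypot(ax - bx, ay - by) <= radius:
                adj[a].append(b)
    return adj

def shortest_route(adj, src, dst):
    """Breadth-first search: the minimum-hop path, every node acting as a router."""
    prev, queue = {src: None}, deque([src])
    while queue:
        n = queue.popleft()
        if n == dst:
            path = []
            while n is not None:
                path.append(n)
                n = prev[n]
            return path[::-1]
        for m in adj[n]:
            if m not in prev:
                prev[m] = n
                queue.append(m)
    return None  # destination unreachable with the current topology

# Four nodes in a line, 100 m apart, 150 m radio range: A cannot reach D directly.
nodes = {"A": (0, 0), "B": (100, 0), "C": (200, 0), "D": (300, 0)}
print(shortest_route(neighbours(nodes, 150), "A", "D"))  # ['A', 'B', 'C', 'D']
```

Because node A's radio cannot reach D, the packet is relayed through B and C, which is exactly the "every node acts as a router" property described above.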

Axial-Field Electrical Machines

Added on: September 25th, 2013 by Afsal Meerankutty 2 Comments

Axial-field electrical machines offer an alternative to conventional machines. In the axial-field machine, the air gap flux is axial in direction and the active current-carrying conductors are radially positioned. This paper presents the design characteristics, special features, manufacturing aspects and potential applications of axial-field electrical machines. Experimental results from several prototypes, including d.c. machines, synchronous machines and single-phase machines, are given. The special features of the axial-field machine, such as its planar and adjustable air gap, flat shape and ease of diversification, enable axial-field machines to have distinct advantages over conventional radial-field machines in certain applications, especially special-purpose applications. The axial-field electrical machines described in this paper are particularly suitable for d.c. and synchronous machines, where the double air gaps present no difficulties since these machines normally require fairly large air gaps to reduce the effect of armature reaction. One of the major objections to the use of AFMs lies in the difficulty of cutting the slots in their laminated cores.

Basalt Rock Fibre (BRF)

Added on: September 24th, 2013 by Afsal Meerankutty 3 Comments

Basalt is well known as a rock found in virtually every country round the world. Basalt rock fibres have no toxic reaction with air or water, are non-combustible and explosion proof. When in contact with other chemicals they produce no chemical reactions that may damage health or the environment. Basalt-based composites can replace steel and known reinforced plastics (1 kg of basalt reinforcement equals 9.6 kg of steel). There seemed to be something quite poetic in using a fibre made from natural rock to reinforce a material which might quite reasonably be described as artificial rock. The raw material for producing basalt fibre is a rock of volcanic origin. Fibres are obtained by melting basalt stones at a temperature of 1400 °C. The melted basalt mass passes through a platinum bushing and is extended into fibres. The special properties of basalt rock fibre reduce the cost of products whilst improving their performance.

Scope: Low-cost, high-performance basalt fibres offer the potential to solve the largest problem in the cement and concrete industry: cracking and structural failure of concrete. Basalt fibre reinforced concrete could become the leading reinforcement system in the world for minimizing cracking, reducing road wear, improving concrete product life, lowering maintenance and replacement costs, and minimizing concrete industry lawsuits. It was therefore with considerable interest that the use of basalt fibres as a reinforcing material for concrete was examined. We propose here to investigate the use of basalt fibres in low-cost composites for civil infrastructure applications requiring excellent mechanical support properties and long lifetimes. Because of the higher performance (strength, temperature range, and durability) and lower potential cost predicted for basalt fibres, they have the potential to cost-effectively replace fiberglass, steel fiber, polypropylene, polyethylene, polyester, aramid and carbon fiber products in many applications.

Intelligent Transportation System

Added on: March 27th, 2012 by Afsal Meerankutty 2 Comments

National highway ITS has been shifted from traditional “Transportation Management” to “Integrated Road Transportation Management”, which incorporates road safety and management technologies. This paper describes a road management system suitable for national highway ITS. When an efficient integrated road transportation management system (transportation + road management) is developed by introducing the road management system to national highway ITS, reductions in traffic congestion cost, travel time and traffic accidents and improvements in road management are expected, thanks to the integration of road safety and management technology. Also, based on the automobiles and mobile phones distributed in Korea, the creation of new markets in the telematics and ubiquitous areas is highly expected.
Keywords: Advanced Road Management System, Intelligent Transportation System (ITS), IT technology

Light Peak

Added on: March 27th, 2012 by Afsal Meerankutty No Comments

Light Peak is the code name for a new high-speed optical cable technology designed to connect electronic devices to each other. Light Peak delivers high bandwidth starting at 10 Gb/s, with the potential ability to scale to 100 Gb/s over the next decade. At 10 Gb/s, a full-length Blu-ray movie can be transferred in less than 30 seconds. Light Peak allows for smaller connectors and longer, thinner, and more flexible cables than currently possible. Light Peak also has the ability to run multiple protocols simultaneously over a single cable, enabling the technology to connect devices such as peripherals, displays, disk drives, docking stations, and more.
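The "less than 30 seconds" claim can be checked with simple arithmetic (taking a single-layer 25 GB Blu-ray disc as an assumed movie size):

```python
movie_bytes = 25e9          # single-layer Blu-ray disc, ~25 GB (assumed size)
link_bps = 10e9             # Light Peak's initial 10 Gb/s line rate

# Bytes must be converted to bits before dividing by the line rate.
seconds = movie_bytes * 8 / link_bps
print(f"{seconds:.0f} s")   # 20 s, consistent with "less than 30 seconds"
```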

Free Space Optics

Added on: March 26th, 2012 by Afsal Meerankutty No Comments

Free space optics (FSO) is a line-of-sight technology that currently enables optical transmission of up to 2.5 Gbps of data, voice, and video communications through the air, allowing optical connectivity without deploying fiber-optic cables or securing spectrum licenses. An FSO system can carry full-duplex data at gigabit-per-second rates over metropolitan distances, from a few city blocks to a few kilometres. FSO, also known as optical wireless, overcomes the last-mile access bottleneck by sending high-bitrate signals through the air using laser transmission.

Security Issues in Grid Computing

Added on: March 25th, 2012 by Afsal Meerankutty No Comments

The last decade has seen a considerable increase in commodity computer and network performance, mainly as a result of faster hardware and more sophisticated software. Nevertheless, there are still problems in the fields of science, engineering and business which cannot be dealt with effectively by the current generation of supercomputers. In fact, due to their size and complexity, these problems are often numerically and/or data intensive and require a variety of heterogeneous resources that are not available from a single machine. A number of teams have conducted experimental studies on the cooperative use of geographically distributed resources conceived as a single powerful computer. The new approach is known by several names, such as metacomputing, seamless scalable computing, global computing and, more recently, Grid computing.

The early efforts in Grid computing started as a project to link supercomputing sites, but it has now grown far beyond its original intent. With its rapid and impressive growth, the Internet has become an attractive means of sharing information across the globe. The idea of Grid computing has emerged from the fact that the Internet can also be used for several other purposes, such as sharing computing power, storage space, scientific devices and software programs. The term “Grid” is chosen as it is analogous to the electrical power grid, which provides consistent, pervasive and ubiquitous power irrespective of its source.

This paper aims to present the state of the art of Grid computing and to survey the major international efforts in developing this emerging technology.

Intelligent Cooling System

Added on: March 25th, 2012 by Afsal Meerankutty 3 Comments

In the present paper, efforts have been made to highlight the concept of an “INTELLIGENT COOLING SYSTEM”. The basic principle behind this is to control the flow rate of coolant by regulating the valve using fuzzy logic.

In the conventional process the flow rate is constant over the entire engine jacket. This induces thermal stresses and reduces efficiency.

The “INTELLIGENT COOLING SYSTEM”, i.e. the implementation of fuzzy logic, will overcome the above-stated drawbacks of any crisp control scheme. The flow rate of coolant will be controlled by a control unit and intelligent sensors.

This is a concept and an innovative idea that has not yet been implemented.
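A minimal sketch of the fuzzy control idea: fuzzify the jacket temperature into overlapping sets, apply rules mapping each set to a valve opening, and defuzzify with a weighted average. The membership ranges and rule outputs below are invented for illustration; the paper specifies none of them:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def valve_opening(temp_c):
    """Fuzzify engine-jacket temperature, then defuzzify to a valve opening (0-1)."""
    # Membership degrees for three illustrative temperature sets.
    cold = tri(temp_c, 40, 60, 80)
    warm = tri(temp_c, 70, 90, 110)
    hot  = tri(temp_c, 100, 120, 140)
    # Rules: cold -> nearly closed, warm -> half open, hot -> fully open.
    num = cold * 0.1 + warm * 0.5 + hot * 1.0
    den = cold + warm + hot
    return num / den if den else 0.1   # default to trickle flow outside all sets

for t in (60, 90, 120):
    print(t, round(valve_opening(t), 2))
```

Unlike a crisp on/off thermostat, intermediate temperatures produce intermediate flow rates, which is what lets the scheme reduce thermal stress across the jacket.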

Optical Fibers

Added on: March 25th, 2012 by Afsal Meerankutty 1 Comment

An optical fiber (or fibre) is a glass or plastic fiber that carries light along its length. Fiber optics is the overlap of applied science and engineering concerned with the design and application of optical fibers. Optical fibers are widely used in fiber-optic communications, which permits transmission over longer distances and at higher bandwidths (data rates) than other forms of communications. Fibers are used instead of metal wires because signals travel along them with less loss, and they are also immune to electromagnetic interference. Fibers are also used for illumination, and are wrapped in bundles so they can be used to carry images, thus allowing viewing in tight spaces. Specially designed fibers are used for a variety of other applications, including sensors and fiber lasers.

Light is kept in the core of the optical fiber by total internal reflection. This causes the fiber to act as a waveguide. Fibers which support many propagation paths or transverse modes are called multi-mode fibers (MMF), while those which can only support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a larger core diameter, and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 550 meters (1,800 ft).
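Total internal reflection and the fiber's acceptance cone follow directly from the core and cladding refractive indices. A short worked example (the index values are typical assumed numbers for silica fiber, not taken from this text):

```python
import math

n_core, n_clad = 1.48, 1.46   # typical silica fiber indices (assumed values)

# Total internal reflection occurs beyond the critical angle at the
# core-cladding boundary: sin(theta_c) = n_clad / n_core.
critical = math.degrees(math.asin(n_clad / n_core))

# Numerical aperture sets the acceptance cone for light launched into the fiber.
na = math.sqrt(n_core**2 - n_clad**2)
acceptance = math.degrees(math.asin(na))

print(f"critical angle = {critical:.1f} deg, NA = {na:.3f}, "
      f"acceptance half-angle = {acceptance:.1f} deg")
```

Rays striking the boundary at more than the critical angle (measured from the normal) stay trapped in the core, which is what makes the fiber act as a waveguide.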

Optical Fiber Communication System

Added on: March 25th, 2012 by Afsal Meerankutty No Comments

Communication is an important part of our daily life. The communication process involves information generation, transmission, reception and interpretation. As needs for various types of communication such as voice, images, video and data communications increase, demands for large transmission capacity also increase. This need for large capacity has driven the rapid development of lightwave technology, and a worldwide industry has developed. An optical or lightwave communication system is a system that uses light waves as the carrier for transmission. An optical communication system mainly involves three parts: transmitter, receiver and channel. In optical communication, transmitters are light sources, receivers are light detectors and the channels are optical fibers. The channel, i.e. the optical fiber, plays an important role because it carries the data from transmitter to receiver. Hence, here we shall discuss mainly optical fibers.

Pneumatic Control System

Added on: March 24th, 2012 by Afsal Meerankutty No Comments

Control systems utilize a pressure differential created by a gas source to drive the transfer of material. Pneumatic control systems use compressed air to operate and power a system: air taken from the atmosphere is compressed, and this compressed air is then used in a pneumatic system to do work. Pneumatic systems are used in many fields, such as lorry brakes, bicycle tyres, car tyres, paint spraying, aircraft and hydraulic systems. In this paper, we propose an intelligent control method for a pneumatic servo nonlinear system with static friction. Real-machine experiments confirmed the improvement in speed of response and stopping accuracy, and the effectiveness of the proposed method.

Pulse Detonation Engine

Added on: March 24th, 2012 by Afsal Meerankutty No Comments

Rocket engines that work much like an automobile engine are being developed at NASA’s Marshall Space Flight Center in Huntsville, Ala. Pulse detonation rocket engines offer a lightweight, low-cost alternative for space transportation. Pulse detonation rocket engine technology is being developed for upper stages that boost satellites to higher orbits. The advanced propulsion technology could also be used for lunar and planetary landers and excursion vehicles that require throttle control for gentle landings.

The engine operates on pulses, so controllers could dial in the frequency of the detonation in the “digital” engine to determine thrust. Pulse detonation rocket engines operate by injecting propellants into long cylinders that are open on one end and closed on the other. When gas fills a cylinder, an igniter—such as a spark plug—is activated. Fuel begins to burn and rapidly transitions to a detonation, or powered shock. The shock wave travels through the cylinder at 10 times the speed of sound, so combustion is completed before the gas has time to expand. The explosive pressure of the detonation pushes the exhaust out the open end of the cylinder, providing thrust to the vehicle.
A major advantage is that pulse detonation rocket engines boost the fuel and oxidizer to extremely high pressure without a turbo pump—an expensive part of conventional rocket engines. In a typical rocket engine, complex turbo pumps must push fuel and oxidizer into the engine chamber at an extremely high pressure of about 2,000 pounds per square inch or the fuel is blown back out.
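Because the engine operates in pulses, time-averaged thrust scales with pulse rate: mean thrust is roughly the impulse delivered per detonation times the detonation frequency, which is why "dialing in the frequency" sets the thrust. The per-pulse impulse below is a purely hypothetical number for illustration; the article gives no per-pulse data:

```python
# Hypothetical numbers purely for illustration; the article gives no per-pulse data.
impulse_per_pulse = 5.0   # N*s delivered by one detonation (assumed)
frequency_hz = 100        # detonations per second, as demonstrated in ground tests

# Time-averaged thrust of a pulsed engine: impulse per pulse times pulse rate.
mean_thrust = impulse_per_pulse * frequency_hz
print(mean_thrust)  # 500.0 N; doubling the frequency doubles the thrust
```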

The pulse mode of pulse detonation rocket engines allows the fuel to be injected at a low pressure of about 200 pounds per square inch. Marshall Engineers and industry partners United Technology Research Corp. of Tullahoma, Tenn. and Adroit Systems Inc. of Seattle have built small-scale pulse detonation rocket engines for ground testing. During about two years of laboratory testing, researchers have demonstrated that hydrogen and oxygen can be injected into a chamber and detonated more than 100 times per second.

NASA and its industry partners have also proven that a pulse detonation rocket engine can provide thrust in the vacuum of space. Technology development now focuses on determining how to ignite the engine in space, proving that sufficient amounts of fuel can flow through the cylinder to provide superior engine performance, and developing computer code and standards to reliably design and predict performance of the new breed of engines.
A developmental, flight-like engine could be ready for demonstration by 2005 and a full-scale, operational engine could be finished about four years later. Manufacturing pulse detonation rocket engines is simple and inexpensive. Engine valves, for instance, would likely be a sophisticated version of automobile fuel injectors. Pulse detonation rocket engine technology is one of many propulsion alternatives being developed by the Marshall Center’s Advanced Space Transportation Program to dramatically reduce the cost of space transportation.

Humanoid Robot

Added on: March 23rd, 2012 by Afsal Meerankutty 3 Comments

The field of humanoid robotics is widely recognized as the current challenge for robotics research. Humanoid research is an approach to understanding and realizing the complex real-world interactions between a robot, an environment, and a human. Humanoid robotics motivates social interactions such as gesture communication or co-operative tasks in the same context as the physical dynamics. This is essential for three-term interaction, which aims at fusing physical and social interaction at fundamental levels.

People naturally express themselves through facial gestures and expressions. Our goal is to build a facial gesture human-computer interface for use in robot applications. This system does not require special illumination or facial make-up. By using multiple Kalman filters we accurately predict and robustly track facial features. Since we reliably track the face in real time, we are also able to recognize motion gestures of the face. Our system can recognize a large set of gestures (13), ranging from “yes”, “no” and “maybe” to detecting winks, blinks and sleeping.
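The Kalman-filter tracking mentioned above can be illustrated with a one-dimensional constant-velocity filter applied to a feature's x-coordinate. This is a generic textbook filter, not the paper's actual tracker, and the noise variances are assumed:

```python
def kalman_track(measurements, q=1e-3, r=4.0):
    """Scalar constant-velocity Kalman filter: state = (position, velocity).

    q: process noise, r: measurement noise variance (both assumed, not from the paper).
    """
    x, v = measurements[0], 0.0              # initial state
    P = [[1.0, 0.0], [0.0, 1.0]]             # state covariance
    estimates = []
    for z in measurements[1:]:
        # Predict: the feature keeps moving at its current velocity.
        x, v = x + v, v
        P = [[P[0][0] + P[0][1] + P[1][0] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        # Update: blend the prediction with the new measurement.
        k0 = P[0][0] / (P[0][0] + r)         # Kalman gain for position
        k1 = P[1][0] / (P[0][0] + r)         # Kalman gain for velocity
        x, v = x + k0 * (z - x), v + k1 * (z - x)
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        estimates.append(x)
    return estimates

# Noisy x-coordinates of a feature drifting right across the image.
print(kalman_track([100, 102, 101, 104, 106, 105, 108]))
```

The prediction step is what lets the tracker keep following a feature through a frame or two of bad measurements, which underlies the "robust" tracking claim.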

Solar Power Satellites

Added on: March 22nd, 2012 by Afsal Meerankutty 5 Comments

The new millennium has introduced increased pressure for finding new renewable energy sources. The exponential increase in population has led to global crises such as global warming, environmental pollution and climate change, and the rapid decrease of fossil fuel reserves. Also, the demand for electric power increases at a much higher pace than other energy demands as the world is industrialized and computerized. Under these circumstances, research has been carried out to look into the possibility of building a power station in space to transmit electricity to Earth by way of radio waves: the Solar Power Satellite. Solar Power Satellites (SPS) convert solar energy into microwaves and send those microwaves in a beam to a receiving antenna on the Earth for conversion to ordinary electricity. SPS is a clean, large-scale, stable electric power source. Solar Power Satellites are known by a variety of other names, such as Satellite Power System, Space Power Station, Space Power System, Solar Power Station, Space Solar Power Station, etc. One of the key technologies needed to enable the future feasibility of SPS is Microwave Wireless Power Transmission. WPT is based on the energy transfer capacity of a microwave beam, i.e. energy can be transmitted by a well-focused microwave beam. Advances in phased array antennas and rectennas have provided the building blocks for a realizable WPT system.

Geothermal Energy

Added on: March 21st, 2012 by Afsal Meerankutty No Comments

The word geothermal comes from the Greek words geo (earth) and therme ( heat). So, geothermal energy is heat from within the earth. We can use the steam and hot water produced inside the earth to heat buildings or generate electricity. Geothermal energy is a renewable energy source because the water is replenished by rainfall and the heat is continuously produced in the earth.

Historically, the first applications of geothermal energy were for space heating, cooking and medical purposes. The earliest record of space heating dates back to 1300 in Iceland. In the early 1800s, geothermal energy was used on what was then a large scale by the conte Francesco de Laderel to recover boric acid. The first mechanical conversion was in 1897, when steam from the field at Larderello, Italy, was used to heat a boiler producing steam which drove a small steam engine. The first attempt to produce electricity also took place at Larderello in 1904, with an electricity generator that powered four light bulbs. This was followed in 1912 by a condensing turbine, and by 1914, 8.5 MW of electricity was being produced. By 1944 Larderello was producing 127 MW. The plant was destroyed near the end of World War II, but was fortunately rebuilt and expanded, eventually reaching 360 MW in 1981.


Secure Socket Layer (SSL)
Added on: March 20th, 2012 by Afsal Meerankutty No Comments

Secure Socket Layer (SSL) denotes the predominant security protocol of the Internet for World Wide Web (WWW) services relating to electronic commerce or home banking.

The majority of web servers and browsers support SSL as the de-facto standard for secure client-server communication. The Secure Socket Layer protocol builds up point-to-point connections that allow private and unimpaired message exchange between strongly authenticated parties.

In the ISO/OSI reference model [ISO7498], SSL resides in the session layer between the transport layer (4) and the application layer (7); with respect to the Internet family of protocols this corresponds to the range between TCP/IP and application protocols such as HTTP, FTP, Telnet, etc. SSL provides no intrinsic synchronization mechanism; it relies on the reliable transport protocol below.

The SSL protocol allows mutual authentication between a client and server and the establishment of an authenticated and encrypted connection. SSL runs above TCP/IP and below HTTP, LDAP, IMAP, NNTP, and other high-level network protocols.
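The client-server model described above (authenticated, encrypted connection above TCP) can be sketched with Python's `ssl` module. SSL itself has been superseded by TLS, but the module follows the same layering; the host name and request below are illustrative:

```python
import socket
import ssl

# Build a context that verifies the server: it loads trusted CA certificates
# and turns on hostname checking, i.e. server authentication.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True: authentication is on by default

def fetch_homepage(host="example.com"):
    """Open an authenticated, encrypted connection above TCP and below HTTP."""
    with socket.create_connection((host, 443)) as raw:
        # wrap_socket performs the handshake: certificate check + key exchange.
        with context.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            return tls.recv(4096).decode(errors="replace")

# fetch_homepage() would return an HTTP response delivered over the secure channel.
```

Note how the application protocol (HTTP) is written unchanged over the wrapped socket: encryption and authentication sit transparently between TCP and the application, exactly as the layering above describes.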

Surface Plasmon Resonance

Added on: March 20th, 2012 by Afsal Meerankutty No Comments

Surface plasmon resonance (SPR) is a phenomenon occurring at metal surfaces (typically gold and silver) when an incident light beam strikes the surface at a particular angle. Depending on the thickness of a molecular layer at the metal surface, the SPR phenomenon results in a graded reduction in intensity of the reflected light. Biomedical applications take advantage of the exquisite sensitivity of SPR to the refractive index of the medium next to the metal surface, which makes it possible to measure accurately the adsorption of molecules on the metal surface and their eventual interactions with specific ligands. The last ten years have seen a tremendous development of SPR use in biomedical applications.

The technique is applied not only to the real-time measurement of the kinetics of ligand-receptor interactions and to the screening of lead compounds in the pharmaceutical industry, but also to the measurement of DNA hybridization, enzyme-substrate interactions, polyclonal antibody characterization, epitope mapping, protein conformation studies and label-free immunoassays. Conventional SPR is applied in specialized biosensing instruments. These instruments use expensive sensor chips of limited reuse capacity and require complex chemistry for ligand or protein immobilization. Our laboratory has successfully applied SPR with colloidal gold particles in buffered solutions. This application offers many advantages over conventional SPR. The support is cheap, easily synthesized, and can be coated with various proteins or protein-ligand complexes by charge adsorption. With colloidal gold, the SPR phenomenon can be monitored in any UV spectrophotometer. For high-throughput applications we have adapted the technology in an automated clinical chemistry analyzer. This simple technology finds application in label-free quantitative immunoassay techniques for proteins and small analytes, in conformational studies with proteins, as well as in real-time association/dissociation measurements of receptor-ligand interactions for high-throughput screening and lead optimization.

Use of Discrete Fiber in Road Construction

Added on: March 19th, 2012 by Afsal Meerankutty 2 Comments

New materials and construction techniques are required to provide civil engineering with alternatives to traditional road construction practices. Traditional techniques have not been able to bear mixed traffic loads for long, so the pavement requires overlaying. To overcome this problem, fiber inclusion in pavements is adopted nowadays. This paper highlights the use of discrete fibers in road construction. Recently, geosynthetics have been used to reinforce and separate base course material for aggregate-surfaced roads and flexible pavements. Inclusion of discrete fibers increases shear strength and ductility.

Dynamic Speed Governor

Added on: March 19th, 2012 by Afsal Meerankutty 2 Comments

The Dynamic Speed Governor is a system that can be implemented effectively for an efficient and perfectly controlled traffic system. This system can limit the speed of a moving vehicle over an area where the speed has to be restricted and retained within a predetermined value. The Dynamic Speed Governor consists of two main parts: the transmitter section and the receiver section. The transmitter section is mounted on the signal board on which the speed limit for that area is indicated; the receiver section is kept inside the vehicle. The transmitter broadcasts a radio-frequency signal carrying the speed limit, and the receiver picks it up. If the speed of the vehicle is greater than the speed limit transmitted for that particular area, the speed governor comes into action and restricts the driver from going beyond that rated speed. When the system detects that the vehicle's speed has exceeded the limit, a signal is generated from the dynamic speed governor circuit. This signal in turn drives the mechanical part of the vehicle, which closes the fuel nozzle and thereby restricts the vehicle from going beyond that speed. In this particular reproduction of the system, the signal output from the circuit is used to trigger a monostable multivibrator, which in turn sounds a buzzer.
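The receiver-side decision logic is simple enough to sketch directly. The function below is an illustrative model of the comparison step, not the actual circuit; the action strings stand in for the fuel-nozzle and buzzer outputs:

```python
def governor_action(vehicle_speed, broadcast_limit):
    """Receiver-side decision: compare vehicle speed with the transmitted limit.

    broadcast_limit is None when no signal-board transmitter is in range.
    """
    if broadcast_limit is None:
        return "no restriction"           # no speed-limit broadcast received
    if vehicle_speed > broadcast_limit:
        # In the real system this signal closes the fuel nozzle and,
        # in the demonstration build, sounds the buzzer.
        return "restrict fuel, sound buzzer"
    return "within limit"

print(governor_action(70, 50))   # restrict fuel, sound buzzer
print(governor_action(40, 50))   # within limit
print(governor_action(80, None)) # no restriction
```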

Tri-Gate Transistors

Added on: March 18th, 2012 by Afsal Meerankutty No Comments

Tri-Gate transistors, the first to be truly three-dimensional, mark a major revolution in the semiconductor industry. The semiconductor industry continues to push technological innovation to keep pace with Moore’s Law, shrinking transistors so that ever more can be packed on a chip. However, at future technology nodes, the ability to shrink transistors becomes more and more problematic, in part due to worsening short-channel effects and an increase in parasitic leakages with scaling of the gate-length dimension. In this regard, the Tri-Gate transistor architecture makes it possible to continue Moore’s Law at 22 nm and below without a major transistor redesign. The physics, technology and advantages of the device are briefly discussed in this paper.

Shallow Water Acoustic Networks

Added on: March 18th, 2012 by Afsal Meerankutty No Comments

Shallow water acoustic networks are generally formed by acoustically connected ocean-bottom sensor nodes, autonomous underwater vehicles (AUVs), and surface stations that serve as gateways and provide radio communication links to on-shore stations. The QoS of such networks is limited by the low bandwidth of acoustic transmission channels, high latency resulting from the slow propagation of sound, and elevated noise levels in some environments. The long-term goal in the design of underwater acoustic networks is to provide for a self-configuring network of distributed nodes with network links that automatically adapt to the environment through selection of the optimum system parameters. This paper considers several aspects of the design of shallow water acoustic networks that maximize throughput and reliability while minimizing power consumption. In the last two decades, underwater acoustic communications has experienced significant progress. The traditional approach for ocean-bottom or ocean-column monitoring is to deploy oceanographic sensors, record the data, and recover the instruments, but this approach fails for real-time monitoring. The ideal solution for real-time monitoring of selected ocean areas for long periods of time is to connect the various instruments through wireless links within a network structure. Basic underwater acoustic networks are formed by establishing bidirectional acoustic communication between nodes such as autonomous underwater vehicles (AUVs) and fixed sensors. The network is then connected to a surface station, which can further be connected to terrestrial networks such as the Internet.

Keywords: Underwater sensor networks, acoustic networks, acoustic communication architectures.
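The high-latency problem above comes straight from the physics: sound in sea water travels at roughly 1500 m/s, about 200,000 times slower than radio. A quick calculation (the 1500 m/s figure is a nominal assumed value; the real speed varies with temperature, salinity and depth):

```python
SOUND_SPEED = 1500.0          # m/s in sea water, nominal assumed value

def one_way_latency(distance_m):
    """Acoustic propagation delay over a given link distance."""
    return distance_m / SOUND_SPEED

# A 3 km acoustic link: 2 s of propagation delay each way, 4 s round trip.
# This is why stop-and-wait style protocols perform poorly underwater.
print(one_way_latency(3000))  # 2.0
```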

Confidential Data Storage and Deletion

Added on: March 18th, 2012 by Afsal Meerankutty No Comments

With the decrease in cost of electronic storage media, more and more sensitive data gets stored on those media. Laptop computers regularly go missing, either because they are lost or because they are stolen. These laptops contain confidential information in the form of documents, presentations, emails, cached data, and network access credentials. This confidential information is typically far more valuable than the laptop hardware if it falls into the wrong hands. There are two major aspects to safeguarding the privacy of data on these storage media/laptops. First, data must be stored in a confidential manner. Second, we must make sure that confidential data, once deleted, can no longer be restored. Various methods exist to store confidential data, such as encryption programs, encrypting file systems, etc. Microsoft BitLocker Drive Encryption provides encryption for hard disk volumes and is available with Windows Vista Ultimate and Enterprise editions. This seminar describes the most commonly used encryption algorithm, the Advanced Encryption Standard (AES), which is used in many of the confidential data storage methods. This seminar also describes some of the confidential data erasure methods, such as physical destruction, data overwriting methods and key erasure.

Keywords: Privacy of data, confidential data storage, Encryption, Advanced Encryption Standard (AES), Microsoft Bit Locker, Confidential data erasure, Data overwriting, Key erasure.
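The data-overwriting erasure method mentioned above can be sketched in a few lines: replace the file's bytes with random data for several passes, then unlink it. This is only an illustration of the idea; on journaling file systems and wear-levelled SSDs, in-place overwriting gives no hard guarantee, which is why key erasure and physical destruction are also discussed:

```python
import os

def overwrite_and_delete(path, passes=3):
    """Illustrative data-overwriting erasure: overwrite the file's bytes
    in place with random data several times, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b", buffering=0) as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # replace contents with random bytes
            f.flush()
            os.fsync(f.fileno())        # push this pass to the storage device

    os.remove(path)

# Demo on a throwaway file.
with open("secret.tmp", "wb") as f:
    f.write(b"confidential data")
overwrite_and_delete("secret.tmp")
print(os.path.exists("secret.tmp"))  # False
```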

Magneto Abrasive Flow Machining

Added on: March 18th, 2012 by Afsal Meerankutty No Comments

Magneto abrasive flow machining (MAFM) is a new technique in machining. The orbital flow machining process has recently been claimed to be another improvement over AFM, which performs three-dimensional machining of complex components. These processes can be classified as hybrid machining processes (HMP), a recent concept in the advancement of non-conventional machining. The reason for developing a hybrid machining process is to make use of combined or mutually enhanced advantages and to avoid or reduce some of the adverse effects the constituent processes produce when they are applied individually. In almost all non-conventional machining processes, such as electric discharge machining, electrochemical machining, laser beam machining, etc., low material removal rate is considered a general problem, and attempts are continuing to develop techniques to overcome it. The present paper reports the preliminary results of an ongoing research project being conducted with the aim of exploring techniques for improving material removal (MR) in AFM. One such technique uses a magnetic field around the workpiece. Magnetic fields have been successfully exploited in the past, for example in magnetic abrasive finishing (MAF), used for micro-machining and finishing of components, particularly circular tubes. The process under investigation is the combination of AFM and MAF, and is given the name Magneto Abrasive Flow Machining (MAFM).

Drive-By-Wire Systems

Added on: March 17th, 2012 by Afsal Meerankutty No Comments

The idea of ‘X-by-wire’, the control of a car’s vital functions like steering and brakes by electrical rather than mechanical links, is tantalizing to some and terrifying to others.

Aircraft may have been equipped with fly-by-wire systems for years, but do we really feel comfortable with the idea of a computer sitting between ourselves and the steering and brakes of our car?

What might at first sound alarming actually has many benefits, and whatever we as customers may think, X-by-wire is already emerging fast. Electronic throttles, the Mercedes Sensotronic Brake Control (SBC) and now BMW’s Active Front Steering (AFS) in the new launches are all examples of drive-by-wire technologies.

  • Mercedes-Benz’s cutting-edge Sensotronic Brake Control technology, used in the new SL, delivers more braking force in an emergency and helps stabilize skids earlier.
  • Delphi’s X-by-wire is a family of advanced braking, steering, throttle and suspension control systems which function without conventional mechanical components.
  • GM’s hydrogen-powered X-by-wire concept uses electrical signals instead of mechanical linkages or hydraulics to operate the motor, brakes and steering functions.

A highly organized network of wires, sensors, controllers and actuators controls the mechanical functions of the vehicle’s steering, braking, throttle and suspension systems.

Electromagnetic Transducer

Added on: March 17th, 2012 by Afsal Meerankutty No Comments

This paper describes a novel electromagnetic transducer called the Four Quadrant Transducer (4QT) for hybrid electric vehicles. The system consists of one electrical machine unit (including two rotors) and two inverters, which enable the vehicle’s Internal Combustion Engine (ICE) to run at its optimum working points regarding efficiency, almost independently of the changing load requirements at the wheels. In other words, the ICE is operated at high torque and low speed as much as possible. As a consequence, reduced fuel consumption will be achieved.

The basic structure of the Four Quadrant Transducer system, simulation results and ideas about suitable topologies for designing a compact machine unit are reported. The simulated system of a passenger car is equipped with a single-step gearbox, making it simple and cost-effective. Since the engine is not mechanically connected to the wheels and the electrical components have lower power ratings than the engine itself, the system takes advantage of the best characteristics of the series and parallel hybrids, respectively. The proposed concept looks promising, and fuel savings of more than 40% compared with a conventional vehicle can be achieved.

Reactive Powder Concrete

Added on: March 17th, 2012 by Afsal Meerankutty No Comments

Concrete is a critical material for the construction of infrastructure facilities throughout the world. A new material known as reactive powder concrete (RPC) is becoming available that differs significantly from traditional concretes. RPC has no large aggregates, and contains small steel fibers that provide additional strength and in some cases can replace traditional mild steel reinforcement. Due to its high density and lack of aggregates, ultrasonic inspections at frequencies ten to twenty times those of traditional concrete inspections are possible. These properties make it possible to evaluate anisotropy in the material using ultrasonic waves, and thereby measure quantitatively the elastic properties of the material. The research reported in this paper examines the elastic properties of this new material, modeled as an orthotropic elastic solid, and discusses ultrasonic methods for evaluating Young’s modulus nondestructively. Calculations of the shear moduli and Poisson’s ratio based on ultrasonic velocity measurements are also reported. Ultrasonic results are compared with traditional destructive methods.
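The velocity-to-modulus step can be sketched in Python using the standard isotropic relations as a first approximation (the paper treats RPC as orthotropic, so this is a simplification; the density and wave speeds below are illustrative assumed values, not measurements from the paper):

```python
# Elastic constants from ultrasonic wave speeds (isotropic approximation).
# rho: density [kg/m^3]; v_l, v_s: longitudinal and shear speeds [m/s].

def elastic_constants(rho, v_l, v_s):
    g = rho * v_s**2                                  # shear modulus [Pa]
    nu = (v_l**2 - 2*v_s**2) / (2*(v_l**2 - v_s**2))  # Poisson's ratio
    e = 2 * g * (1 + nu)                              # Young's modulus [Pa]
    return e, g, nu

# Illustrative values in the range reported for dense cementitious materials.
E, G, nu = elastic_constants(rho=2500.0, v_l=4700.0, v_s=2800.0)
print(f"E = {E/1e9:.1f} GPa, G = {G/1e9:.1f} GPa, nu = {nu:.3f}")
# E = 48.0 GPa, G = 19.6 GPa, nu = 0.225
```

The same three measured quantities thus yield Young’s modulus, the shear modulus and Poisson’s ratio at once, which is the nondestructive attraction of the ultrasonic approach.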

Threats of HEMP and HPM Devices

Added on: March 16th, 2012 by Afsal Meerankutty No Comments

Electromagnetic Pulse (EMP) is an instantaneous, intense energy field that can overload or disrupt at a distance numerous electrical systems and high technology microcircuits, which are especially sensitive to power surges. A large scale EMP effect can be produced by a single nuclear explosion detonated high in the atmosphere. This method is referred to as High-Altitude EMP (HEMP). A similar, smaller-scale EMP effect can be created using non-nuclear devices with powerful batteries or reactive chemicals. This method is called High Power Microwave (HPM). Several nations, including reported sponsors of terrorism, may currently have a capability to use EMP as a weapon for cyber warfare or cyber terrorism to disrupt communications and other parts of the U.S. critical infrastructure. Also, some equipment and weapons used by the U.S. military may be vulnerable to the effects of EMP.

The threat of an EMP attack against the United States is hard to assess, but some observers indicate that it is growing along with worldwide access to newer technologies and the proliferation of nuclear weapons. In the past, the threat of mutually assured destruction provided a lasting deterrent against the exchange of multiple high-yield nuclear warheads. However, now even a single, specially-designed low-yield nuclear explosion high above the United States, or over a battlefield, can produce a large-scale EMP effect that could result in a widespread loss of electronics, but no direct fatalities, and may not necessarily evoke a large nuclear retaliatory strike by the U.S. military. This, coupled with the possible vulnerability of U.S. commercial electronics and U.S. military battlefield equipment to the effects of EMP, may create a new incentive for other countries to develop or acquire a nuclear capability.

Policy issues raised by this threat include (1) what is the United States doing to protect civilian critical infrastructure systems against the threat of EMP, (2) how does the vulnerability of U.S. civilian and military electronics to EMP attack encourage other nations to develop or acquire nuclear weapons, and (3) how likely are terrorist organizations to launch a smaller-scale EMP attack against the United States?

Electronic Fuel Injection System

Added on: March 15th, 2012 by Afsal Meerankutty 1 Comment

In developed and developing countries, considerable emphasis is being laid on the minimization of pollutants from internal combustion engines. A two-stroke cycle engine produces a considerable amount of pollutants when gasoline is used as a fuel, due to short-circuiting. These pollutants, which include unburnt hydrocarbons and carbon monoxide, are harmful to human beings. There is a strong need to develop new technology that could minimize pollution from these engines.

Direct fuel injection has been demonstrated to significantly reduce unburned hydrocarbon emissions by timing the injection of fuel in such way as to prevent the escape of unburned fuel from the exhaust port during the scavenging process.

The increased use of petroleum fuels by automobiles has not only caused fuel scarcities, price hikes, higher import bills, and economic imbalance but also causes health hazards due to its toxic emissions. Conventional fuels used in automobiles emit toxic pollutants, which cause asthma, chronic cough, skin degradation, breathlessness, eye and throat problems, and even cancer.

In recent years, environmental improvement (CO2, NOx and ozone reduction) and energy issues have become more and more important worldwide. Natural gas is a good alternative fuel to address these problems because of its abundant availability and clean-burning characteristics.

Gi-Fi Technology

Added on: March 15th, 2012 by Afsal Meerankutty 6 Comments

Gi-Fi, or Gigabit Wireless, is the world’s first transceiver integrated on a single chip that operates at 60 GHz on the CMOS process. It will allow wireless transfer of audio and video data at up to 5 gigabits per second, ten times the current maximum wireless transfer rate, at one-tenth of the cost, usually within a range of 10 meters. It utilizes a 5 mm square chip and a 1 mm wide antenna burning less than 2 mW of power to transmit data wirelessly over short distances, much like Bluetooth.
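To put the quoted rate in perspective, here is a quick back-of-envelope calculation of the time to move a 1 GB file at Gi-Fi's nominal 5 Gbit/s versus a link one-tenth as fast (idealized link rates only; protocol overhead is ignored):

```python
def transfer_time_s(size_bytes, rate_bps):
    """Seconds to move size_bytes over an ideal link of rate_bps."""
    return size_bytes * 8 / rate_bps

FILE = 1_000_000_000                    # a 1 GB file (10**9 bytes)
print(transfer_time_s(FILE, 5e9))       # Gi-Fi at 5 Gbit/s:  1.6 s
print(transfer_time_s(FILE, 0.5e9))     # one-tenth the rate: 16.0 s
```

A tenfold rate increase shortens the transfer from sixteen seconds to under two, which is what makes cable-free transfer of video-sized files practical at short range.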

Gi-Fi will help push wireless communications into a faster gear. For many years cables ruled the world. Optical fibers played a dominant role for their higher bit rates and faster transmission. But the installation of cables caused great difficulty and thus led to wireless access. The foremost of these is Bluetooth, which can cover 9-10 meters. Wi-Fi followed, with a coverage area of 91 meters. No doubt, the introduction of Wi-Fi wireless networks has proved a revolutionary solution to the “last mile” problem. However, the standard’s original limitations on data exchange rate and range, the number of channels, and the high cost of the infrastructure have not yet made it possible for Wi-Fi to become a total threat to cellular networks on the one hand, and hard-wired networks on the other. But man’s continuous quest for even better technology, despite the substantial advantages of present technologies, led to the introduction of a new, more up-to-date standard for data exchange rate, i.e., Gi-Fi.

The development will enable the truly wireless office and home of the future. As the integrated transceiver is extremely small, it can be embedded into devices. The breakthrough will mean the networking of office and home equipment without wires will finally become a reality.

In this paper we present a low-cost, low-power, high-bandwidth chip, which will be vital in enabling the digital economy of the future.

Remote Sketching System

Added on: March 15th, 2012 by Afsal Meerankutty No Comments

There are several ways of building a remote sketching system. One way is to use a tablet and a stylus to input the sketch, and a computer monitor to display the sketch at the remote site. Such systems have a number of disadvantages. Writing with a stylus on a glass tablet is unnatural compared to sketching with a regular pen and paper. Shading and other effects are harder to achieve. Changing color means using the computer to select a new color. Incorporating existing hard-copy documents such as a graded exam is impossible.

Another way of building a remote sketching system is to use a video camera to image the sketch at one end, transmit the captured video to the other end, and display it there using an LCD projector. See Figure 1 for a schematic diagram of how such a system might operate. The first such camera-projector based remote sketching system was Pierre Wellner’s Xerox “Double DigitalDesk”.

Strain Gauge

Added on: March 15th, 2012 by Afsal Meerankutty No Comments

A strain gauge (also strain gage) is a device used to measure the strain of an object. Invented by Edward E. Simmons and Arthur C. Ruge in 1938, the most common type of strain gauge consists of an insulating flexible backing which supports a metallic foil pattern. The gauge is attached to the object by a suitable adhesive, such as cyanoacrylate. As the object is deformed, the foil is deformed, causing its electrical resistance to change. This resistance change, usually measured using a Wheatstone bridge, is related to the strain by the quantity known as the gauge factor.
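The gauge-factor relation described above can be sketched directly: strain is the fractional resistance change divided by the gauge factor, and a quarter-bridge Wheatstone circuit converts that change into a measurable voltage. The numbers below (a 350 Ω gauge, GF = 2, 5 V excitation) are typical assumed values, not figures from the text:

```python
def strain_from_resistance(r_nominal, delta_r, gauge_factor=2.0):
    """Strain = (dR/R) / GF; GF ~= 2 for common metallic foil gauges."""
    return (delta_r / r_nominal) / gauge_factor

def quarter_bridge_output(v_excitation, strain, gauge_factor=2.0):
    """Quarter-bridge Wheatstone output: Vout ~= Vex * GF * strain / 4
    (small-strain approximation)."""
    return v_excitation * gauge_factor * strain / 4.0

eps = strain_from_resistance(r_nominal=350.0, delta_r=0.7)  # 0.2% change
print(eps)                              # 0.001, i.e. 1000 microstrain
print(quarter_bridge_output(5.0, eps))  # 0.0025 V, i.e. 2.5 mV
```

The millivolt-scale output is why the Wheatstone bridge, rather than a direct resistance measurement, is the standard readout for strain gauges.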

ARM Processor

Added on: March 14th, 2012 by Afsal Meerankutty 1 Comment

An ARM processor is any of several 32-bit RISC (reduced instruction set computer) microprocessors developed by Advanced RISC Machines, Ltd. The ARM architecture was originally conceived by Acorn Computers Ltd. in the 1980s. Since then, it has evolved into a family of microprocessors extensively used in consumer electronic devices such as mobile phones, multimedia players, pocket calculators and PDAs (personal digital assistants).
ARM processor features include:

  • Load/store architecture
  • An orthogonal instruction set
  • Mostly single-cycle execution
  • A 16 × 32-bit register file
  • Enhanced power-saving design

ARM provides developers with intellectual property (IP) solutions in the form of processors, physical IP, cache and SoC designs, application-specific standard products (ASSPs), related software and development tools — everything you need to create an innovative product design based on industry-standard components that are ‘next generation’ compatible.

Zigbee Technology

Added on: March 14th, 2012 by Afsal Meerankutty No Comments

ZigBee is a communication standard that provides short-range, cost-effective networking capability. It has been developed with an emphasis on low-cost, battery-powered applications such as building automation and industrial and commercial control. ZigBee has been introduced by the IEEE and the ZigBee Alliance to provide a first general standard for these applications. The IEEE is the Institute of Electrical and Electronics Engineers, a non-profit organization dedicated to furthering technology involving electronics and electronic devices. The 802 group is the section of the IEEE involved in network operations and technologies, including mid-sized networks and local networks. Group 15 deals specifically with wireless networking technologies, and includes the now ubiquitous 802.15.1 working group, which is also known as Bluetooth.

The name “ZigBee” is derived from the erratic zigging patterns many bees make between flowers when collecting pollen. This is evocative of the invisible webs of connections existing in a fully wireless environment. The standard itself is regulated by a group known as the ZigBee Alliance, with over 150 members worldwide.

While Bluetooth focuses on connectivity between large packet user devices, such as laptops, phones, and major peripherals, ZigBee is designed to provide highly efficient connectivity between small packet devices. As a result of its simplified operations, which are one to two full orders of magnitude less complex than a comparable Bluetooth device, pricing for ZigBee devices is extremely competitive, with full nodes available for a fraction of the cost of a Bluetooth node.

ZigBee devices are actively limited to a throughput of 250 kbps, compared to Bluetooth’s much larger pipeline of 1 Mbps, and operate on the 2.4 GHz ISM band, which is available throughout most of the world.

ZigBee has been developed to meet the growing demand for capable wireless networking between numerous low-power devices. In industry, ZigBee is being used for next-generation automated manufacturing, with small transmitters in every device on the floor allowing for communication between devices and a central computer. This new level of communication permits finely-tuned remote monitoring and manipulation. In the consumer market, ZigBee is being explored for everything from linking low-power household devices such as smoke alarms to a central housing control unit, to centralized light controls.

The specified maximum range of operation for ZigBee devices is 250 feet (76 m), substantially further than that of Bluetooth-capable devices, although security concerns raised over remotely “sniping” Bluetooth devices may prove to hold true for ZigBee devices as well.

Due to its low power output, ZigBee devices can sustain themselves on a small battery for many months, or even years, making them ideal for install-and-forget purposes, such as most small household systems. Predictions of ZigBee installation for the future, most based on the explosive use of ZigBee in automated household tasks in China, look to a near future when upwards of sixty ZigBee devices may be found in an average American home, all communicating with one another freely and regulating common tasks seamlessly.
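The rate gap quoted earlier (250 kbps versus Bluetooth's 1 Mbps) matters less than it sounds for sensor traffic, as a rough airtime calculation shows (nominal air rates only; real throughput is lower once protocol overhead is counted):

```python
def airtime_ms(payload_bytes, rate_bps):
    """Milliseconds of airtime to send payload_bytes at rate_bps."""
    return payload_bytes * 8 / rate_bps * 1000

REPORT = 100                          # a 100-byte sensor report
print(airtime_ms(REPORT, 250_000))    # ZigBee:    3.2 ms
print(airtime_ms(REPORT, 1_000_000))  # Bluetooth: 0.8 ms
```

For a node that sends one such report per second, the radio is active well under one percent of the time at either rate, which is why ZigBee can trade raw throughput for the lower power draw that makes years of battery life possible.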

Plastic Welding

Added on: March 14th, 2012 by Afsal Meerankutty 2 Comments

Plastic welding is much like spot welding, with one noted difference: heat is supplied through convection from the pincher tips, instead of conduction. The two plastic pieces are brought together, and at the time of welding a jet of hot air is liberated. This melts the parts to be joined along with the plastic filler rod. As the rod starts melting, it is forced into the joint and causes the fusion of the parts.

Plastic identification is the first point to be noted in order to choose a suitable plastic welding rod. A plastic welding rod, or thermoplastic welding rod, has a constant cross-section shape, and is used to join two plastic pieces. It may have a circular or triangular cross-section. Porosity of the plastic welding rod is an important factor: high porosity creates air bubbles in the rod, which decrease the quality of the welding. The rods used must therefore maintain zero porosity; in other words, they should be void-free. Products like chemical tanks, water tanks, heat exchangers and plumbing fittings are manufactured using the technique of plastic welding. By adopting this technique, money can be saved.

Using plastic welding, two plastics can be welded together. This type of weld is performed on children’s toys, lawn furniture, auto parts and other plastic equipment used daily, both for domestic and commercial purposes. This type of welding is employed to join thermoplastics when they are heated and under a particular pressure. In normal practice, the pieces are joined together using a filler material, though on certain occasions the filler material can be avoided. Generally, plastic is not durable and has a shorter life span. Natural elements like cold weather, ultraviolet radiation from the sun, or continuous exposure to contaminating chemicals will damage plastic products. Plastic can also be damaged if it is hit on a hard surface. But, as the price of new parts is high, it is often preferred to repair the existing products.

As there are different types of plastics, we must know which one we are working with in order to find the exact welding material to be used. We must know the difference between thermoplastics and thermosets, because it is not possible to weld thermosets. If you use the wrong welding rod for the plastic to be repaired, bonding will not take place. As materials like polyolefins have a lower surface energy, a special group of polyolefin adhesives has to be used. When you are repairing plastic, there are usually two types of defects: a crack or a broken part. In the case of a crack, there is a particular stress affecting the inside of the material, so you have to repair the full length of the crack so that it does not continue through the piece.

Bluetooth Low Energy Technology

Added on: March 14th, 2012 by Afsal Meerankutty No Comments

Now that wireless connections are established solutions in various sectors of consumer electronics, the question arises whether devices that draw long life from a small battery could find benefit as well in a global standard for wireless low energy technology. Makers of sensors for sports, health and fitness devices have dabbled in wireless but not together, while manufacturers of products like watches have never even considered adding wireless functionality because no options were available. Several wireless technologies have tried to address the needs of the button cell battery market, but most were proprietary and garnered little industry support. However, none of these technologies let smaller manufacturers plug in to a global standard that provides a viable link with devices like mobile phones and laptops.

However, companies that want to make their small devices wireless need to build and sell either a dedicated display unit or an adapter that connects to a computing platform such as a mobile phone, PC or iPod. There have been few successful products that followed this route to a mass market. A new flavor of Bluetooth technology may be just the answer, and a more efficient alternative to yet another wireless standard.

In the ten years since engineers from a handful of companies came together to create the first Bluetooth specification, Bluetooth technology has become a household term, a globally recognized standard for connecting portable devices. The Bluetooth brand ranks among the top ingredient technology brands worldwide, recognized by a majority of consumers around the world. A thriving global industry of close to 11,000 member companies now designs Bluetooth products and works together to develop future generations of the technology, found in well over 50 percent of mobile phones worldwide and with more than two billion devices shipped to date. Bluetooth wireless technology has established the standard for usability, ease of setup and compatibility across all manufacturers. A well-established set of Bluetooth profiles define the communication needs for a wide range of applications, making it easy for a manufacturer to add Bluetooth wireless connectivity to new devices — from phones to headsets to printers — with a minimum of programming and testing work.

Pervasive Computing

Added on: March 14th, 2012 by Afsal Meerankutty No Comments

Pervasive computing refers to embedding computers and communication in our environment. This provides an attractive vision for the future of computing. The idea behind pervasive computing is to make the computing power disappear into the environment, yet always be there whenever needed; in other words, it means availability and invisibility. These invisible computers won’t have keyboards or screens, but will watch us, listen to us and interact with us. Pervasive computing makes the computer operate in the messy and unstructured world of real people and real objects. Distributed devices in this environment must have the ability to dynamically discover and integrate other devices. The prime goal of this technology is to make human life simpler, safer and more efficient by using the ambient intelligence of computers.

Holographic Memory

Added on: March 14th, 2012 by Afsal Meerankutty No Comments

Devices that use light to store and read data have been the backbone of data storage for nearly two decades. Compact discs revolutionized data storage in the early 1980s, allowing multi-megabytes of data to be stored on a disc that has a diameter of a mere 12 centimeters and a thickness of about 1.2 millimeters. In 1997, an improved version of the CD, called a digital versatile disc (DVD), was released, which enabled the storage of full-length movies on a single disc.

CDs and DVDs are the primary data storage methods for music, software, personal computing and video. A CD can hold 783 megabytes of data. A double-sided, double-layer DVD can hold 15.9 GB of data, which is about eight hours of movies. These conventional storage mediums meet today’s storage needs, but storage technologies have to evolve to keep pace with increasing consumer demand. CDs, DVDs and magnetic storage all store bits of information on the surface of a recording medium. In order to increase storage capabilities, scientists are now working on a new optical storage method called holographic memory that will go beneath the surface and use the volume of the recording medium for storage, instead of only the surface area. Three-dimensional data storage will be able to store more information in a smaller space and offer faster data transfer times.

Holographic memory is a developing technology that promises to revolutionize storage systems. It can store up to 1 TB of data in a sugar-cube-sized crystal; data from more than 1000 CDs can fit into a holographic memory system. Most of the computer hard drives available today can hold only 10 to 40 GB of data, a small fraction of what a holographic memory system can hold. Conventional memories use only the surface to store data, but holographic data storage systems use the volume, which gives them more advantages than conventional storage systems. Holographic memory is based on the principle of holography.
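The “more than 1000 CDs” figure checks out against the 783 MB CD capacity quoted earlier, as a quick calculation shows (decimal units: 1 TB = 10^12 bytes, 1 MB = 10^6 bytes):

```python
CRYSTAL_BYTES = 1e12   # a 1 TB sugar-cube-sized crystal
CD_BYTES = 783e6       # one CD, per the capacity quoted above

cds_per_crystal = CRYSTAL_BYTES / CD_BYTES
print(int(cds_per_crystal))   # 1277 -- indeed "more than 1000 CDs"
```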

Scientist Pieter J. van Heerden first proposed the idea of holographic (three-dimensional) storage in the early 1960s. A decade later, scientists at RCA Laboratories demonstrated the technology by recording 500 holograms in an iron-doped lithium-niobate crystal and 550 holograms of high-resolution images in a light-sensitive polymer material. The lack of cheap parts and the advancement of magnetic and semiconductor memories placed the development of holographic data storage on hold.

IRIS Recognition

Added on: March 12th, 2012 by Afsal Meerankutty 1 Comment

Iris recognition is an automated method of capturing a person’s unique biological data that distinguishes him or her from another individual. It has emerged as one of the most powerful and accurate identification techniques in the modern world, and has proven to be the most foolproof technique for the identification of individuals without the use of cards, PINs and passwords. It facilitates automatic identification whereby electronic transactions or access to places, information or accounts are made easier, quicker and more secure.

A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person’s face is the detailed texture of each eye’s iris: an estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees of freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person’s iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most significant bits comprise a 512-byte “iris code”. Statistical decision theory generates identification decisions from Exclusive-OR comparisons of complete iris codes at the rate of 4,000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical “cross-over” error rate of one in 131,000 when a decision criterion is adopted that would equalize the False Accept and False Reject error rates.
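The Exclusive-OR comparison described above amounts to computing a Hamming distance: the fraction of bits in which two 512-byte iris codes disagree. A minimal sketch (real systems also mask out eyelid and eyelash bits, which is omitted here):

```python
def hamming_distance(code_a, code_b):
    """Fraction of differing bits between two equal-length byte strings."""
    assert len(code_a) == len(code_b)
    differing = sum(bin(a ^ b).count("1") for a, b in zip(code_a, code_b))
    return differing / (len(code_a) * 8)

enrolled = bytes([0b10110010] * 512)    # a stored 512-byte iris code
probe = bytes([0b10110011] * 512)       # one bit per byte flipped
print(hamming_distance(enrolled, enrolled))  # 0.0   -> same eye
print(hamming_distance(enrolled, probe))     # 0.125 -> still very close
```

A decision rule accepts a match when the distance falls below a threshold; codes from different eyes cluster near 0.5, so a cutoff of roughly a third of the bits separates the two cases with the confidence levels described above.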

Touch Screen

Added on: March 12th, 2012 by Afsal Meerankutty No Comments

A touch screen is a computer display screen that is sensitive to human touch, allowing a user to interact with the computer by touching pictures or words on the screen. Touch screens are used with information kiosks (interactive computer terminals available for public use, such as those offering internet access or site-specific information), computer-based training devices, and systems designed to help individuals who have difficulty manipulating a mouse or keyboard. This technology can be used as an alternative user interface with applications that normally require a mouse, such as a web browser. Some applications are designed specifically for touch screen technology, often having larger icons and links than typical PC applications. Monitors are also available with built-in touch screen kits.

A touch screen kit includes a touch screen panel, a controller, and a software driver. The panel is a clear sheet attached externally to the monitor that plugs into a serial or Universal Serial Bus (USB) port, or into a bus card installed inside the computer. The touch screen panel registers touch events and passes these signals to the controller. The controller then processes the signals and sends the data to the processor. The software driver translates the touch events into mouse events. Drivers can be provided for both Windows and Macintosh operating systems. Internal touch screen kits are available but require professional installation because they must be installed inside the monitor.
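The driver's job described above, translating touch events into mouse events, can be sketched as follows (all names here are hypothetical and illustrative only, not from any real driver API):

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    x: int          # panel coordinates reported by the controller
    y: int
    pressed: bool   # finger down or lifted

def to_mouse_event(evt):
    """Map a touch event to a (mouse action, screen x, screen y) tuple."""
    action = "button_down" if evt.pressed else "button_up"
    return (action, evt.x, evt.y)

print(to_mouse_event(TouchEvent(320, 240, True)))
# ('button_down', 320, 240)
```

Because the translation happens in the driver, applications written for a mouse work unmodified, which is why a touch screen can serve as a drop-in alternative user interface.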

Virtual Retinal Display

Added on: March 12th, 2012 by Afsal Meerankutty No Comments

The Virtual Retinal Display (VRD) is a personal display device under development at the University of Washington’s Human Interface Technology Laboratory in Seattle, Washington USA. The VRD scans light directly onto the viewer’s retina. The viewer perceives a wide field of view image. Because the VRD scans light directly on the retina, the VRD is not a screen based technology.

The VRD was invented at the University of Washington in the Human Interface Technology Lab (HIT) in 1991. The development began in November 1993. The aim was to produce a full color, wide field-of-view, high resolution, high brightness, low cost virtual display. Microvision Inc. has the exclusive license to commercialize the VRD technology. This technology has many potential applications, from head-mounted displays (HMDs) for military/aerospace applications to medicine.

The VRD projects a modulated beam of light (from an electronic source) directly onto the retina of the eye, producing a rasterized image. The viewer has the illusion of seeing the source image as if he or she were standing two feet away in front of a 14-inch monitor. In reality, the image is on the retina of the viewer’s eye and not on a screen. The quality of the image is excellent, with stereo view, full color, wide field of view and no flicker.
