Engineering Topics Category

Biofuels

Added on: March 10th, 2012

The energy crisis is not just a local or national issue but a global one as well. Various groups are responding to energy needs in particular ways; the U.S. and several other countries are looking to synthetic biology to address these problems. Research in this new field, as it is emerging in the United States, is largely devoted to the production of biofuels. Several institutions, which we will call "biofuels interest groups," envision this energy source playing a significant role in the remediation of current energy challenges. But what are the current challenges that are motivating these groups to pursue this particular resource through a particular and new science such as synthetic biology? After an examination of several of these interest groups, stationed here in the U.S., we have come to the conclusion that the energy crisis to which each group responds is framed by them in a particular way, such that biofuels play a major, if not the only viable and sustainable, role in the remediation of the problem. These groups claim that synthetic biology offers unique and viable paths toward a sustainable future. We will examine exactly what kinds of future are illustrated by each institution, and what they mean by a "sustainable future," by identifying the views, resources, technologies, and management strategies of each group. In addition we will situate them in their human practices context to view not only what they plan to do, but how and to what extent they will carry out their plans. The groups we present are the Joint BioEnergy Institute (JBEI), Amyris Biotechnologies, and the Energy Biosciences Institute (EBI). In order to assess the feasibility of these models outside of the lab, we will present a section which provides an overview of the current socio-political atmosphere in which they must operate. This section examines alternative approaches to the energy crisis, motivations for realizing a certain approach, and the decision-making forces at play. Two distinct ideologies, represented by the US Department of Energy (DOE) and Tad Patzek, aid in this discussion.

Medical Image Fusion

Added on: March 10th, 2012

Image fusion is the process by which two or more images are combined into a single image retaining the important features from each of the original images. The fusion of images is often required for images acquired from different instrument modalities or capture techniques of the same scene or objects. Important applications of image fusion include medical imaging, microscopic imaging, remote sensing, computer vision, and robotics. Fusion techniques range from the simplest method, pixel averaging, to more complicated methods such as principal component analysis and wavelet transform fusion. Several approaches to image fusion can be distinguished, depending on whether the images are fused in the spatial domain or transformed into another domain and their transforms fused.

With the development of new imaging sensors arises the need for a meaningful combination of all employed imaging sources. The actual fusion process can take place at different levels of information representation; a generic categorization, sorted in ascending order of abstraction, distinguishes the signal, pixel, feature and symbolic levels. This report focuses on the so-called pixel-level fusion process, where a composite image is built from several input images. To date, the result of pixel-level image fusion is considered primarily to be presented to the human observer, especially in image sequence fusion (where the input data consists of image sequences). A possible application is the fusion of forward-looking infrared (FLIR) and low-light visible (LLTV) images obtained by an airborne sensor platform to aid a pilot navigating in poor weather conditions or darkness. In pixel-level image fusion, some generic requirements can be imposed on the fusion result: the fusion process should preserve all relevant information of the input imagery in the composite image (pattern conservation); the fusion scheme should not introduce any artifacts or inconsistencies which would distract the human observer or subsequent processing stages; and the fusion process should be shift and rotation invariant, i.e. the fusion result should not depend on the location or orientation of an object in the input imagery. Image sequence fusion raises the additional problem of temporal stability and consistency of the fused sequence. The human visual system is primarily sensitive to moving light stimuli, so moving artifacts or time-dependent contrast changes introduced by the fusion process are highly distracting to the human observer. In the case of image sequence fusion, two additional requirements therefore apply. Temporal stability: the fused image sequence should be temporally stable, i.e. gray-level changes in the fused sequence must be caused only by gray-level changes in the input sequences, not by the fusion scheme itself. Temporal consistency: gray-level changes occurring in the input sequences must be present in the fused sequence without any delay or contrast change.
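To make pixel-level fusion concrete, here is a minimal sketch (not part of the original report) of two of the simplest fusion rules mentioned above, assuming the inputs are already co-registered grayscale arrays; the FLIR/LLTV arrays here are random stand-ins.

```python
import numpy as np

def fuse_average(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Pixel averaging: the simplest pixel-level fusion rule.
    return (a.astype(float) + b.astype(float)) / 2.0

def fuse_select_max(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Per-pixel maximum selection: keeps the brighter (often the more
    # salient) response from either sensor at each location.
    return np.maximum(a, b)

# Toy example: two co-registered 4x4 "sensor" images.
rng = np.random.default_rng(0)
ir = rng.integers(0, 256, (4, 4))   # stand-in for a FLIR frame
tv = rng.integers(0, 256, (4, 4))   # stand-in for an LLTV frame
print(fuse_average(ir, tv))
print(fuse_select_max(ir, tv))
```

Transform-domain methods (e.g. wavelet fusion) apply a selection rule like the above to transform coefficients rather than raw pixels, which better satisfies the pattern-conservation requirement.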

Cellular Neural Networks

Added on: March 10th, 2012

Cellular Neural Networks (CNN) are a revolutionary concept and an experimentally proven computing paradigm for analog computers. A standard CNN architecture consists of an m×n rectangular array of cells c(i,j) with Cartesian coordinates. If the inputs and outputs of a cell are treated as binary arguments, a CNN can realize Boolean functions. Using this technology, analog computers can mimic the anatomy and physiology of many sensory and processing organs, with stored programmability. This has been called a "sensor revolution": cheap sensors and MEMS arrays arranged in the desired form of artificial eyes, ears, noses and so on. Such a computer is capable of computing 3 trillion equivalent digital operations per second, a performance that can be matched only by supercomputers. Thanks to their unique architecture, CNN chips are mainly used for processing brain-like tasks, which are non-numeric and spatio-temporal in nature and require no more accuracy than that of common neurons.
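As an illustration of the cell dynamics behind this paradigm, the sketch below numerically integrates the standard Chua-Yang CNN state equation on a small grid. The template values (A, B, z) are assumptions chosen to behave like an edge extractor, not values from the original report.

```python
import numpy as np

def cnn_output(x):
    # Standard CNN piecewise-linear output: y = 0.5 * (|x+1| - |x-1|)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

def conv3(img, kernel):
    # 3x3 neighborhood template sum with zero padding (cells outside
    # the array are treated as inactive).
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * kernel)
    return out

def run_cnn(u, A, B, z, steps=200, dt=0.05):
    # Euler integration of the cell equation dx/dt = -x + A*y + B*u + z,
    # where * denotes the 3x3 template sum over the neighborhood.
    x = np.zeros_like(u, dtype=float)
    for _ in range(steps):
        y = cnn_output(x)
        x += dt * (-x + conv3(y, A) + conv3(u, B) + z)
    return cnn_output(x)

# Illustrative (assumed) edge-extraction templates; real CNN template
# libraries tabulate many such (A, B, z) triples per task.
A = np.zeros((3, 3)); A[1, 1] = 2.0
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)
z = -0.5
u = np.zeros((8, 8)); u[2:6, 2:6] = 1.0   # a bright square on dark ground
print(run_cnn(u, A, B, z).round(1))        # +1 along the square's edge
```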

Hyperplane

Added on: March 10th, 2012

A new concept for flight to orbit is described in this paper. It involves mass addition to an ascending, air-breathing, hypersonic lifting vehicle. General laws for flight to orbit with mass addition are developed, and it is shown that payload capabilities are higher than those of even the most advanced rocket launchers. Hyperplanes are multipurpose, fully reusable aerospace vehicles. They use air-breathing engines and can take off from any conventional airport. They are multipurpose in the sense that they can be used for passenger or freight transport as well as for satellite launching. Detailed studies led to a new concept for a hydrogen-fuelled, horizontal take-off, fully reusable, single-stage hypersonic vehicle called the HYPERPLANE. Avatar, a mini aerospace plane design, is a technology demonstrator for hypersonic transportation; it is a scaled-down version of the hyperplane.

The key enabling technology for hyperplanes is the scramjet, which uses air-breathing engine technology. A hyperplane requires a booster rocket to give it the supersonic velocity required for scramjet operation. Thus hyperplanes use normal jet engines for horizontal take-off, then a rocket to boost the velocity, and a scramjet to sustain hypersonic speed. Once operational, a hyperplane could even launch satellites at lower cost than rockets. Many nations are working on hyperplane technology, including the USA, Russia and India. The only successful hypersonic flight so far was demonstrated by the X-43 of the USA. The hyperplane Avatar, which is being developed by India, is expected to be used as a reusable missile launcher. This technology could revolutionize modern-day travel. Here we will discuss the working, advantages, disadvantages and various examples of hyperplanes.

WiMAX

Added on: March 9th, 2012

In recent years, broadband technology has rapidly become an established, global commodity required by a high percentage of the population. The demand has risen rapidly, with a worldwide installed base of 57 million lines in 2002 rising to an estimated 80 million lines by the end of 2003. This healthy growth curve is expected to continue steadily over the next few years and reach the 200 million mark by 2006. DSL operators, who initially focused their deployments in densely populated urban and metropolitan areas, are now challenged to provide broadband services in suburban and rural areas where new markets are quickly taking root. Governments are prioritizing broadband as a key political objective for all citizens, to overcome the "broadband gap", also known as the "digital divide".

Wireless DSL (WDSL) offers an effective, complementary solution to wireline DSL, allowing DSL operators to provide broadband service to additional areas and populations that would otherwise find themselves outside the broadband loop. Government regulatory bodies are realizing the inherent worth of wireless technologies as a means of solving digital-divide challenges in the last mile and have accordingly initiated a deregulation process in recent years for both licensed and unlicensed bands to support this application. Recent technological advancements and the formation of a global standard and interoperability forum, WiMAX, have set the stage for WDSL to take a significant role in the broadband market. Revenues from services delivered via Broadband Wireless Access have already reached $323 million and are expected to jump to $1.75 billion.

Abrasive Blast Cleaning

Added on: March 9th, 2012

An abrasive is a material, often a mineral, that is used to shape or finish a workpiece through rubbing which leads to part of the workpiece being worn away. While finishing a material often means polishing it to gain a smooth, reflective surface, it can also involve roughening, as in satin, matte or beaded finishes.

Abrasives are extremely commonplace and are used very extensively in a wide variety of industrial, domestic, and technological applications. This gives rise to a large variation in the physical and chemical composition of abrasives as well as the shape of the abrasive. Common uses for abrasives include grinding, polishing, buffing, honing, cutting, drilling, sharpening, and sanding (see abrasive machining). (For simplicity, “mineral” in this article will be used loosely to refer to both minerals and mineral-like substances whether man-made or not.)

Files act by abrasion but are not classed as abrasives as they are a shaped bar of metal. However, diamond files are a form of coated abrasive (as they are metal rods coated with diamond powder).

Abrasives give rise to a form of wound called an abrasion or even an excoriation. Abrasions may arise following strong contact with surfaces made of things such as concrete, stone, wood, carpet, and roads, though these surfaces are not intended for use as abrasives.

Biometric Systems

Added on: March 9th, 2012

A biometric is defined as a unique, measurable, biological characteristic or trait used for automatically recognizing or verifying the identity of a human being. Statistically analyzing these biological characteristics has become known as the science of biometrics. These days, biometric technologies are typically used to analyze human characteristics for security purposes. Five of the most common physical biometric patterns analyzed for security purposes are the fingerprint, hand, eye, face, and voice. This paper considers the use of biometric characteristics as a means of identification. We will give a brief overview of the field of biometrics and summarize some of its advantages, disadvantages, strengths, limitations, and related privacy concerns. We will also look at how this process has been refined over time and how it currently works.

DNA Computing

Added on: March 9th, 2012

Molecular biologists are beginning to unravel the information-processing tools, such as enzymes, that evolution has spent billions of years refining. These tools are now being harnessed by taking large numbers of DNA molecules and using them as biological computer processors.

Dr. Leonard Adleman, a well-known scientist, found a way to exploit the speed and efficiency of biological reactions to solve the "Hamiltonian path problem", a close relative of the "traveling salesman problem".

Based on Dr. Adleman’s experiment, we will explain DNA computing, its algorithms, how to manage DNA based computing and the advantages and disadvantages of DNA computing.
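As a software analogue (our illustration, not Adleman's wet-lab protocol), the brute-force filter below mirrors the generate-and-filter structure of his experiment: each candidate path plays the role of a randomly assembled DNA strand, and the successive checks play the role of the biochemical selection steps. The graph is made up, and edges are taken as directed.

```python
import itertools

def hamiltonian_paths(n_nodes, edges, start, end):
    # Keep only candidates that (1) start and end correctly,
    # (2) visit every vertex exactly once (a permutation does this by
    # construction), and (3) use only edges that exist in the graph.
    edge_set = set(edges)
    survivors = []
    for perm in itertools.permutations(range(n_nodes)):
        if perm[0] != start or perm[-1] != end:
            continue
        if all((a, b) in edge_set for a, b in zip(perm, perm[1:])):
            survivors.append(perm)
    return survivors

# Adleman's original instance had 7 cities; this small graph is invented.
edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
print(hamiltonian_paths(4, edges, start=0, end=3))
```

The exhaustive enumeration here is exactly what the DNA approach parallelizes: a test tube assembles an enormous number of candidate strands at once, and the lab steps filter them.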

DNA Computing in Security

Added on: March 9th, 2012

As modern encryption algorithms are broken, the world of information security looks in new directions to protect the data it transmits. The concept of using DNA computing in the fields of cryptography and steganography has been identified as a possible technology that may bring forward a new hope for unbreakable algorithms. Is the fledgling field of DNA computing the next cornerstone in the world of information security or is our time better spent following other paths for our data encryption algorithms of the future?

This paper will outline some of the basics of DNA and DNA computing and its use in the areas of cryptography, steganography and authentication.

Research has been performed in both cryptographic and steganographic situations with respect to DNA computing. The constraints of its high-tech lab requirements and computational limitations, combined with labour-intensive extraction methods, illustrate that the field of DNA computing is far from any kind of efficient use in today's security world. DNA authentication, on the other hand, has exhibited great promise, with real-world examples already surfacing in the marketplace today.
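To make the encoding side concrete, here is a minimal sketch of one common convention for mapping binary data onto DNA bases (two bits per base). The specific mapping is an assumption for illustration; published DNA cryptography and steganography schemes differ in their encodings.

```python
# A common two-bit encoding maps binary data onto the four bases.
ENC = {"00": "A", "01": "C", "10": "G", "11": "T"}
DEC = {v: k for k, v in ENC.items()}

def to_dna(data: bytes) -> str:
    # Serialize bytes to a bit string, then to a base sequence.
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(ENC[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(strand: str) -> bytes:
    # Inverse mapping: bases back to bits, bits back to bytes.
    bits = "".join(DEC[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

msg = b"attack at dawn"
strand = to_dna(msg)
print(strand[:24], "...")
assert from_dna(strand) == msg
```

In a steganographic setting, a strand like this would be hidden among decoy DNA; in a cryptographic setting it would first be enciphered by conventional means.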

Ozone Water Treatment

Added on: March 8th, 2012

Purifying Water With Ozone.

Ozone has been used in Europe for water treatment since early in the 20th century. Initial applications were to disinfect relatively clean spring or well water, but they increasingly evolved to also oxidize contaminants common to surface waters. Since World War II, ozonation has become the primary method to assure clean water in Switzerland, West Germany and France. More recently, major fresh water and waste water treatment facilities using ozone water treatment methods have been constructed throughout the world.

By comparison, the use of ozone for water treatment and purification in the United States has been much more limited. However, its use has been increasing in the US, particularly over the last decade, as the negative effects of chlorination have become more apparent. For example, the City of Los Angeles has built a modern water treatment plant that uses ozone for primary disinfection and microflocculation of as much as 600 million gallons of water per day. An East Texas power utility will be the first small water utility service in Texas to use ozone water treatment technology for drinking water purification; it has hired BiOzone for this task.

In the field of creative ozone water treatment, BiOzone designs, manufactures, and installs the finest ozone generator systems produced today. Each component interfaces harmoniously with the others to achieve the most cost-effective and optimum performance. These process trains are installed around the world in a variety of industries. The solutions we offer are available nowhere else in the world.

Symbian OS

Added on: March 8th, 2012

Symbian OS is the operating system licensed by the world's leading mobile phone manufacturers. It is designed for the specific requirements of open, data-enabled 2G, 2.5G and 3G mobile phones. The key features of Symbian, and how Symbian supports the modern features of mobile phones, are discussed briefly.

The Symbian platform was created by merging and integrating software assets contributed by Nokia, NTT DoCoMo, Sony Ericsson and Symbian Ltd., including Symbian OS assets at its core, the S60 platform, and parts of the UIQ and MOAP(S) user interfaces.

Symbian is a mobile operating system (OS) and computing platform designed for smartphones and currently maintained by Accenture. The Symbian platform is the successor to Symbian OS and Nokia Series 60; unlike Symbian OS, which needed an additional user interface system, Symbian includes a user interface component based on S60 5th Edition. The latest version, Symbian^3, was officially released in Q4 2010 and first used in the Nokia N8. In May 2011 an update, Symbian Anna, was officially announced, followed by Nokia Belle (previously Symbian Belle) in August 2011.
Symbian OS was originally developed by Symbian Ltd. It is a descendant of Psion's EPOC and runs exclusively on ARM processors, although an unreleased x86 port existed.

Hydro Drive

Added on: March 8th, 2012

Hydroforming is the process by which water pressure is used to form complex shapes from sheet or tube material.

Applications of hydroforming in the automotive industry are gaining popularity globally. The trend in auto manufacturing toward making parts lighter and more complicated, with strength reinforcement only where required, is on the rise. The capability of hydroforming is increasingly being used to produce such complicated parts.

Tyres

Added on: March 8th, 2012

The primary purpose of the tyre is to provide traction.

  1. Tyres also help the suspension absorb road shocks, but this is a side benefit.
  2. They must perform under a variety of conditions. The road might be wet or dry; paved with asphalt, concrete or gravel; or there might be no road at all.
  3. The car might be traveling slowly on a straight road, or moving quickly through curves or over hills. All of these conditions call for special requirements that must be present, at least to some degree, in all tyres.
  4. In addition to providing good traction, tyres are also designed to carry the weight of the vehicle, to withstand side thrust over varying speeds and conditions, and to transfer braking and driving torque to the road.
  5. As the tyre rolls on the road, friction is created between the tyre and the road. This friction gives the tyre its traction.
  6. Although good traction is desirable, it must be limited.
  7. Too much traction means there is too much friction.
  8. Too much friction means there is a lot of rolling resistance.
  9. Rolling resistance wastes engine power and fuel, so it must be kept to a minimal level. This dilemma is a major concern in designing today's tyres.
  10. The primary purpose of the tyre is to provide traction along with carrying the weight of the vehicle.

Grid Computing

Added on: March 8th, 2012

The Grid has the potential to fundamentally change the way science and engineering are done. The aggregate power of computing resources connected by networks (the Grid) exceeds that of any single supercomputer by many orders of magnitude. At the same time, our ability to carry out computations of the scale and level of detail required, for example, to study the Universe or simulate a rocket engine, is severely constrained by available computing power. Hence, such applications should be one of the main driving forces behind the development of Grid computing.
Grid computing is emerging as a new environment for solving difficult problems. Linear and nonlinear optimization problems can be computationally expensive. Resource access and management is one of the most important factors for grid computing. It requires a mechanism that can make decisions automatically to support the collaboration and scheduling of computing tasks.

Grid computing is an active research area which promises to provide a flexible infrastructure for complex, dynamic and distributed resource sharing and sophisticated problem solving environments. The Grid is not only a low level infrastructure for supporting computation, but can also facilitate and enable information and knowledge sharing at the higher semantic levels, to support knowledge integration and dissemination.

Light Tree

Added on: March 8th, 2012

The concept of a light-tree is introduced in a wavelength-routed optical network. A light-tree is a point-to-multipoint generalization of a lightpath. A lightpath is a point-to-point all-optical wavelength channel connecting a transmitter at a source node to a receiver at a destination node. Lightpath communication can significantly reduce the number of hops (or lightpaths) a packet has to traverse, and this reduction can, in turn, significantly improve the network's throughput. We extend the lightpath concept by incorporating an optical multicasting capability at the routing nodes in order to increase the logical connectivity of the network and further decrease its hop distance. We refer to such a point-to-multipoint extension as a light-tree. Light-trees can not only provide improved performance for unicast traffic, but also naturally better support multicast and broadcast traffic. In this study, we shall concentrate on the application and advantages of light-trees for unicast and broadcast traffic. We formulate the light-tree-based virtual topology design problem as an optimization problem with one of two possible objective functions: for a given traffic matrix,

(i) Minimize the network-wide average packet hop distance, or,
(ii) Minimize the total number of transceivers in the network. We demonstrate that an optimum light-tree-based virtual topology has clear advantages over an optimum lightpath-based virtual topology with respect to the above two objectives.
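As a toy illustration of objective (i) (our sketch, not the paper's formulation), the code below computes the traffic-weighted average hop distance of a given virtual topology, treating lightpaths as directed edges. It assumes every traffic-bearing node pair is reachable over the topology.

```python
from collections import deque

def hop_counts(n, edges):
    # BFS hop distance between every reachable node pair.
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)               # lightpaths are directed
    dist = {}
    for src in range(n):
        seen = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen[w] = seen[u] + 1
                    q.append(w)
        for dst, d in seen.items():
            dist[src, dst] = d
    return dist

def avg_hop_distance(n, edges, traffic):
    # Traffic-weighted mean hop count: objective (i) in the text.
    dist = hop_counts(n, edges)
    pairs = [(s, d) for s in range(n) for d in range(n)
             if s != d and traffic[s][d] > 0]
    total = sum(traffic[s][d] * dist[s, d] for s, d in pairs)
    return total / sum(traffic[s][d] for s, d in pairs)

# Toy 3-node directed ring with an invented traffic matrix.
traffic = [[0, 5, 1], [2, 0, 4], [3, 2, 0]]
print(avg_hop_distance(3, [(0, 1), (1, 2), (2, 0)], traffic))
```

A light-tree adds multicast branching at routing nodes, which shortens many of these hop distances; the optimization problem in the text searches over which trees to set up.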

Utility Fog (Nanofog)

Added on: March 7th, 2012

Nanotechnology is based on the concept of tiny, self-replicating robots. The Utility Fog is a very simple extension of this idea. Utility Fog is highly advanced nanotechnology which the Technocratic Union has developed as the ultimate multi-purpose tool. It is a user-friendly, completely programmable collection of nanomachines that can form a vast range of machinery, from office pins to spaceships. It can simulate any material from gas to liquid to solid, and in sufficient quantities it can even be used to implement the ultimate in virtual reality.

With the right programming, the robots can exert any force in any direction on the surface of any object. They can support the object so that it apparently floats in air. They can support a person applying the same pressure that a chair would. A programme running in Utility Fog can thus simulate the physical existence of any object.

Utility Fog should be capable of simulating most everyday materials, dynamically changing its form and forming a substrate for an integrated virtual reality. This paper will examine the basic concept, and explore some of the applications of this material.

Four Wheel Steering System

Added on: March 7th, 2012

Four-wheel steering (4WS), also called rear-wheel steering or all-wheel steering, provides a means to actively steer the rear wheels during turning maneuvers. It should not be confused with four-wheel drive, in which all four wheels of a vehicle are powered. It improves handling and helps the vehicle make tighter turns.

Production-built cars tend to understeer or, in a few instances, oversteer. If a car could automatically compensate for an understeer/oversteer problem, the driver would enjoy nearly neutral steering under varying conditions. 4WS is a serious effort on the part of automotive design engineers to provide near-neutral steering.

The front wheels do most of the steering. Rear-wheel turning is generally limited to 5°-6° during an opposite-direction turn. During a same-direction turn, rear-wheel steering is limited to about 1°-1.5°.

When both the front and rear wheels steer in the same direction, they are said to be in-phase, and this produces a kind of sideways movement of the car at low speeds. When the front and rear wheels are steered in opposite directions, this is called anti-phase, counter-phase or opposite-phase, and it produces a sharper, tighter turn.
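The following sketch caricatures a speed-dependent 4WS control law; the thresholds and gains are invented for illustration, though they echo the angle limits quoted above. Real systems use carefully tuned maps rather than a single switch point.

```python
def rear_steer_angle(front_angle_deg: float, speed_kmh: float) -> float:
    # Illustrative (assumed) 4WS law: anti-phase at low speed for tight
    # turns, in-phase at high speed for stable lane changes.
    if speed_kmh < 40:          # low speed: counter-phase rear steer
        gain = -0.18            # up to roughly 5-6 deg opposite
    else:                       # high speed: in-phase rear steer
        gain = 0.05             # roughly 1-1.5 deg same direction
    return gain * front_angle_deg

print(rear_steer_angle(30.0, 20.0))   # -5.4 deg (anti-phase, tight turn)
print(rear_steer_angle(30.0, 90.0))   #  1.5 deg (in-phase, stability)
```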

Polyfuses (PPTC)

Added on: March 7th, 2012

A fuse is a one-time overcurrent protection device employing a fusible link that melts when the current exceeds a certain level for a certain length of time. Typically, a wire or chemical compound breaks the circuit when the current exceeds the rated value.

Polyfuse is a new standard for circuit protection: it is resettable. Technically, polyfuses are not fuses but polymeric positive temperature coefficient thermistors (PPTC). Resettable fuses provide overcurrent protection and automatic restoration.

Common Rail Direct Injection

Added on: March 7th, 2012

Compared with petrol, diesel is the lower-quality product of the petroleum family. Diesel particles are larger and heavier than petrol's, and thus more difficult to pulverize. Imperfect pulverization leads to more unburnt particles, hence more pollutants, lower fuel efficiency and less power.

Common-rail technology is intended to improve the pulverization process. Conventional direct injection diesel engines must repeatedly generate fuel pressure for each injection. In CRDI engines, however, the pressure is built up independently of the injection sequence and remains permanently available in the fuel line. Some CRDI systems use an ion sensor to provide real-time combustion data for each cylinder. The common rail upstream of the cylinders acts as an accumulator, distributing the fuel to the injectors at a constant pressure of up to 1600 bar. High-speed solenoid valves, regulated by the electronic engine management, separately control the injection timing and the amount of fuel injected for each cylinder as a function of the cylinder's actual need.

In other words, pressure generation and fuel injection are independent of each other. This is an important advantage of common-rail injection over conventional fuel injection systems, as CRDI increases the controllability of the individual injection processes and further refines fuel atomization, saving fuel and reducing emissions. A fuel economy improvement of 25 to 35% is obtained over a standard diesel engine, and a substantial noise reduction is achieved due to more synchronized timing operation. The principle of CRDI is also used in petrol engines as GDI (Gasoline Direct Injection), which removes to a great extent the drawbacks of conventional carburetors and MPFI systems.

Cryogenics and its Space Applications

Added on: March 7th, 2012 by Afsal Meerankutty

Cryogenics is the study of how to reach low temperatures and of how materials behave when they get there. Besides the familiar Fahrenheit and Celsius (centigrade) scales, cryogenicists use other temperature scales, the Kelvin and Rankine scales. Although the apparatus used for spacecraft is specialized, some of the general approaches are the same as those used in everyday life. Cryogenics involves the study of low temperatures, from about 100 kelvin down to absolute zero.
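For reference, the scales mentioned above are related by fixed linear conversions, as in this short snippet (the 77 K example is the familiar boiling point of liquid nitrogen).

```python
def f_to_c(f: float) -> float:
    return (f - 32.0) * 5.0 / 9.0

def c_to_k(c: float) -> float:
    return c + 273.15            # Kelvin: absolute Celsius-sized scale

def k_to_rankine(k: float) -> float:
    return k * 9.0 / 5.0         # Rankine: absolute Fahrenheit-sized scale

k = 77.0                         # liquid nitrogen boils near 77 K
print(f"{k} K = {k - 273.15:.2f} degC = {k_to_rankine(k):.1f} degR")
```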

One interesting feature of materials at low temperatures is that air condenses into a liquid. The two main gases in air are oxygen and nitrogen. Liquid oxygen, "lox" for short, is used in rocket propulsion. Liquid nitrogen is used as a coolant. Helium, which is much rarer than oxygen or nitrogen, is also used as a coolant. In more detail, cryogenics is the study of how to produce low temperatures and of what happens to materials when you have cooled them down.

Magnetic Refrigeration

Added on: March 7th, 2012 by Afsal Meerankutty

Magnetic refrigeration is a technology that has proven to be environmentally safe. Models have shown a 25% efficiency improvement over vapor-compression systems. In order to make the magnetic refrigerator commercially viable, scientists need to know how to achieve larger temperature swings. Two advantages of magnetic refrigeration over vapor-compression systems are that no hazardous chemicals are used and that it can be up to 60% efficient.
There are still some thermal and magnetic hysteresis problems to be solved for the first-order phase transition materials that exhibit the giant magnetocaloric effect (GMCE) to become really useful; this is a subject of current research. The effect is currently being explored to produce better refrigeration techniques, especially for use in spacecraft. The technique is already used to achieve cryogenic temperatures (below 10 K) in the laboratory.

The objective of this effort is to determine the feasibility of designing, fabricating and testing a sensor cooler which uses solid materials as the refrigerant. These materials demonstrate the unique property known as the magnetocaloric effect, which means that they increase and decrease in temperature when magnetized and demagnetized. This effect has been observed for many years and was used for cooling near absolute zero. Recently, materials are being developed which have sufficient temperature and entropy change to make them useful for a wide range of temperature applications. The proposed effort includes magnetocaloric material selection, analyses, design and integration of components into a preliminary design. Benefits of this design are lower cost, longer life, lower weight and higher efficiency, because it requires only one moving part: the rotating disk on which the magnetocaloric material is mounted. The unit uses no gas compressor, no pumps, no working fluid, no valves, and no ozone-destroying chlorofluorocarbons/hydrochlorofluorocarbons (CFCs/HCFCs). Potential commercial applications include cooling of electronics, superconducting components used in telecommunications equipment (cell phone base stations), home and commercial refrigerators, heat pumps, air conditioning for homes, offices and automobiles, and virtually any place that refrigeration is needed.

Cell Phone Virus and Security

Added on: March 7th, 2012

Rapid advances in low-power computing, communications, and storage technologies continue to broaden the horizons of mobile devices, such as cell phones and personal digital assistants (PDAs). As the use of these devices extends into applications that require them to capture, store, access, or communicate sensitive data (e.g., mobile e-commerce, financial transactions, acquisition and playback of copyrighted content), security becomes an immediate concern. Left unaddressed, security concerns threaten to impede the deployment of new applications and value-added services, which is an important engine of growth for the wireless, mobile appliance and semiconductor industries. According to a survey of mobile appliance users, 52% cited security concerns as the biggest impediment to their adoption of mobile commerce.

A cell-phone virus is basically the same thing as a computer virus — an unwanted executable file that “infects” a device and then copies itself to other devices. But whereas a computer virus or worm spreads through e-mail attachments and Internet downloads, a cell-phone virus or worm spreads via Internet downloads, MMS (multimedia messaging service) attachments and Bluetooth transfers. The most common type of cell-phone infection right now occurs when a cell phone downloads an infected file from a PC or the Internet, but phone-to-phone viruses are on the rise.
Current phone-to-phone viruses almost exclusively infect phones running the Symbian operating system. The large number of proprietary operating systems in the cell-phone world is one of the obstacles to mass infection. Cell-phone-virus writers have no Windows-level market share to target, so any virus will affect only a small percentage of phones.

Infected files usually show up disguised as applications like games, security patches, add-on functionalities and free stuff. Infected text messages sometimes steal the subject line from a message you've received from a friend, which of course increases the likelihood of your opening it, but opening the message isn't enough to get infected. You have to choose to open the message attachment and agree to install the program, which is another obstacle to mass infection: to date, no reported phone-to-phone virus auto-installs. The installation obstacles and the methods of spreading limit the amount of damage the current generation of cell-phone viruses can do.

Standard operating systems and Bluetooth technology will be a trend in future cell phone features. These will enable cell-phone viruses to spread either through SMS or by sending Bluetooth requests when cell phones are physically close enough. The difference in spreading methods gives these two types of viruses different epidemiological characteristics. The spread of SMS viruses is mainly based on people's social connections, whereas the spreading of Bluetooth viruses is affected by people's mobility patterns and population distribution. Using cell-phone data recording the calls, SMS and locations of more than 6 million users, we study the spread of SMS and Bluetooth viruses and characterize how the social network and the mobility of mobile phone users affect such spreading processes.
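As a toy model of these two spreading mechanisms (our illustration; the study described above used real call and mobility records), the sketch below contrasts spreading over a fixed contact graph (SMS-like) with spreading to random nearby devices (Bluetooth-like). All parameters are invented.

```python
import random

def simulate(n=2000, steps=60, p_infect=0.3, mode="sms",
             contacts=None, seed=1):
    # SMS-like viruses hop along a fixed social-contact graph;
    # Bluetooth-like viruses hit a few random devices per step,
    # standing in for physical proximity and mobility.
    rng = random.Random(seed)
    infected = {0}
    for _ in range(steps):
        new = set()
        for u in infected:
            if mode == "sms":
                neigh = contacts[u]
            else:  # "bluetooth": whoever happens to be nearby
                neigh = rng.sample(range(n), 3)
            for v in neigh:
                if v not in infected and rng.random() < p_infect:
                    new.add(v)
        infected |= new
    return len(infected)

n = 2000
rng = random.Random(42)
contacts = {u: rng.sample(range(n), 4) for u in range(n)}
print("SMS-like final infections      :", simulate(n, mode="sms", contacts=contacts))
print("Bluetooth-like final infections:", simulate(n, mode="bluetooth"))
```

The SMS variant saturates only the social component reachable from the first victim, while the random-mixing Bluetooth variant can reach the whole population; this mirrors the qualitative difference the text describes.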

Broadband Over Powerline

Added on: March 7th, 2012

Despite the spread of broadband technology in the last few years, there are significant areas of the world that don't have access to high-speed Internet. When weighed against the relatively small number of customers Internet providers would gain, the incremental expenditure of laying cable and building the necessary infrastructure to provide DSL or cable in many areas, especially rural ones, is too great. But if broadband could be served through power lines, there would be no need to build a new infrastructure: anywhere there is electricity there could be broadband. Technology to deliver high-speed data over the existing electric power delivery network is closer to reality in the marketplace. Broadband over Powerline (BPL) is positioned to offer an alternative means of providing high-speed Internet access, Voice over Internet Protocol (VoIP), and other broadband services, using medium- and low-voltage lines to reach customers' homes and businesses. By combining the technological principles of radio, wireless networking, and modems, developers have created a way to send data over power lines and into homes at speeds between 500 kilobits and 3 megabits per second (equivalent to DSL and cable). By modifying the current power grids with specialized equipment, BPL developers could partner with power companies and Internet service providers to bring broadband to everyone with access to electricity.

Electronically Controlled Suspension System

Added on: March 6th, 2012

An electronically controlled vehicle suspension system that can be switched between a hard and a soft state has a mode selection switch that permits a choice between a hard mode, which puts the suspension in the hard state, and an automatic mode, which automatically puts the suspension into the soft or hard state depending on vehicle conditions. A control device is also provided that puts the suspension in the automatic mode, in preference to the mode to which the mode selection switch has been set, when the engine key switch is turned on at start-up. The suspension system is therefore always in the automatic mode when the engine has been started, so that the suspension is automatically switched between the soft and hard states depending on vehicle conditions, thereby assuring a comfortable ride at all times.
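A minimal sketch of the mode logic described above, with invented class names and switching thresholds, might look like this.

```python
from dataclasses import dataclass

@dataclass
class SuspensionController:
    # Modes: "automatic" or "hard" (selected by the driver's switch).
    mode: str = "automatic"

    def on_key_on(self) -> None:
        # Per the description above, turning the engine key on forces
        # automatic mode regardless of where the selector was left.
        self.mode = "automatic"

    def select_mode(self, mode: str) -> None:
        self.mode = mode

    def damper_state(self, speed_kmh: float, lateral_g: float) -> str:
        if self.mode == "hard":
            return "hard"
        # Automatic mode: firm up when cornering hard or at high speed.
        # Thresholds are invented for illustration.
        return "hard" if (lateral_g > 0.4 or speed_kmh > 120) else "soft"

ecu = SuspensionController(mode="hard")   # left in hard mode previously
ecu.on_key_on()                           # key-on restores automatic mode
print(ecu.damper_state(60, 0.1))          # soft: relaxed cruising
print(ecu.damper_state(80, 0.6))          # hard: hard cornering
```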

Flying Windmills

Added on: March 4th, 2012

High Altitude Wind Power uses flying electric generator (FEG) technology in the form of what have popularly been called flying windmills. It is a proposed renewable energy project over rural or low-populated areas intended to produce around 12,000 MW of electricity with only 600 well-clustered rotorcraft kites that use simple autogyro physics to generate far more kinetic energy than a nuclear plant can.

According to Sky WindPower, the overuse of fossil fuels and the overabundance of radioactive waste from nuclear energy plants are taking our planet once again down a path of destruction, for something that is more expensive and far more dangerous in the long run. FEG technology is simply cheaper and cleaner, and can provide more energy than those environmentally unhealthy methods of the past, making it a desirable substitute or alternative.

The secret to functioning High Altitude Wind Power is efficient tether technology that reaches 15,000 feet into the air, far higher than birds fly, though it creates restricted airspace for planes and other aircraft.

The same materials used in the tethers that hold radar balloons in place can also hold flying windmills in place, and energy cable technology is getting ever lighter and stronger. Flying windmills appear to be 90 percent more energy efficient in wind tunnel tests than their land-based counterparts, roughly three times the efficiency, thanks to the constantly abundant high-altitude wind available 15,000 feet up, harvested by clustered rotorcraft kites tethered with existing anti-terrorist technologies like those used on the Mexican/American border radar balloons.

High Altitude Wind Power offers a cleaner and more powerful source of power generation than anything available on the grid at present, and if Sky WindPower Corp. has its way, FEG technology and flying windmills will take the lead toward a more sustainable future within the decade.

Smart Note Taker

Added on: March 4th, 2012

The Smart NoteTaker is a helpful product that satisfies the needs of people in today's technological and fast-paced life. The product can be used in many ways. The Smart NoteTaker provides fast and easy note-taking for people who are busy with other work. With the help of the Smart NoteTaker, people will be able to write notes in the air while occupied with their tasks. The written note will be stored on the memory chip of the pen and can be read in digital form after the job is done. This will save time and make life easier.

The Smart NoteTaker is also helpful for the blind, who can think and write freely with it. Another place where the product can play an important role is when two people talk on the phone. The subscribers are apart from each other during their talk, and they may want to use figures or text to understand each other better. It is also useful for instructors in presentations. An instructor may not want to present the lecture standing in front of the board. The drawn figure can be processed and sent directly to the server computer in the room, which can then broadcast the drawn shape through the network to all of the computers present in the room. In this way, lectures are intended to be more efficient and fun. The product will be simple but powerful. It will be able to sense the 3D shapes and motions that the user tries to draw. The sensed information will be processed, transferred to the memory chip, and then monitored on the display device. The drawn shape can then be broadcast over the network or sent to a mobile device.

There will be an additional feature of the product which will display previously taken notes in an application program on the computer. This application program can be a word document or an image file. The figures drawn in the air will be recognized and, with the help of the software program we will write, the desired character will be printed in the word document. If the application program is a paint-related program, the most similar shape will be chosen by the program and printed on the screen.

Since Java applets are suitable for both drawings and strings, all these applications can be put together by developing a single Java program. The Java code we develop will also be installed on the pen, so that the processor inside the pen can type and draw the desired shape or text on the display panel.

Speech Recognition

Added on: March 4th, 2012

Language is human beings' most important means of communication, and speech is its primary medium. Speech research provides an international forum for communication among researchers in the disciplines that contribute to our understanding of the production, perception, processing, learning and use of speech. Spoken interaction, both between human interlocutors and between humans and machines, is inescapably embedded in the laws and conditions of communication, which comprise the encoding and decoding of meaning as well as the mere transmission of messages over an acoustical channel. Here we deal with this interaction between man and machine through synthesis and recognition applications.
The paper dwells on speech technology and the conversion of speech into analog and digital waveforms that can be understood by machines.

Speech recognition, or speech-to-text, involves capturing and digitizing the sound waves, converting them to basic language units or phonemes, constructing words from phonemes, and contextually analyzing the words to ensure correct spelling for words that sound alike. Speech Recognition is the ability of a computer to recognize general, naturally flowing utterances from a wide variety of users. It recognizes the caller’s answers to move along the flow of the call.
We have emphasized the modeling of speech units and grammar on the basis of the Hidden Markov Model. Speech recognition allows you to provide input to an application with your voice. The applications and limitations of this subject have enlightened us about the impact of speech processing on our modern technical field.
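To illustrate HMM-based decoding, here is a minimal Viterbi decoder; the two-state "phoneme" model and its probabilities are invented for the example.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    # Most likely hidden state sequence for an HMM, in the log domain.
    # pi: initial probs (n,), A: transitions (n, n), B: emissions (n, m).
    n, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)   # (from_state, to_state)
        back[t] = scores.argmax(axis=0)      # best predecessor per state
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):            # backtrack through pointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy 2-phoneme model with 3 acoustic symbols (numbers are made up).
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3], [0.4, 0.6]])
B  = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], pi, A, B))
```

Real recognizers chain many such models (one per phoneme or word) under a language model; the decoding principle is the same dynamic program.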
While there is still much room for improvement, current speech recognition systems show remarkable performance. We are only human, but as we develop this technology and build on it we attain remarkable achievements. Rather than asking what is still deficient, we ask instead what should be done to make it efficient.

Precision Engineering and Practice

Added on: March 4th, 2012

There are three terms often used in precision practices, and they are often used incorrectly or in a vague manner. The terms are accuracy, repeatability, and resolution. Because the present discussion is on machining and fabrication methods, the definitions will be given in terms related to machine tools. However, these terms apply to metrology, instrumentation, and experimental procedures as well.

Precision engineering deals with the many sources of error in manufacturing and their solutions. Precision is the most important thing in the manufacturing field, and machining is an important part of the manufacturing process. Many factors, such as feedback variables, machine tool variables, spindle variables, workpiece variables, and environmental effects such as thermal errors, affect the accuracy of a machine. The main goal of precision engineering is to reduce the uncertainty of dimensions. Achieving an exact dimension is very difficult, so a tolerance is allowed on the workpiece.

High Efficiency Miller Cycle Gas Engine

Added on: March 4th, 2012

The output of gas engines for cogeneration mainly ranges from 100 to 1,000 kW. Present gas engines are broadly classified into two types: the lean-burn system and the stoichiometric air-fuel ratio combustion system, with the lower-output engines using the stoichiometric air-fuel ratio combustion system while the medium and large engines adopt the lean-burn system. The lean-burn system generally features high generating efficiency and low NOx emissions, in addition to the excellent endurance afforded by the low-temperature combustion flame.

Mitsubishi Heavy Industries, Ltd. (MHI) and Osaka Gas Co., Ltd. have jointly applied the Miller cycle to a lean-burn gas engine to develop the world's first gas engine in this class with a generating efficiency of 40%. The 280 kW engine was released commercially in April 2000 after clearing an endurance test of over 4,000 hours. This paper describes the main technologies and performance specifications for this engine as well as for the series of engines planned in the future.

Nanotechnology in Mechanical Engineering

Added on: March 4th, 2012

We live in a world of machines. And the technical foundation for these machines lies in the steam engine developed during the 1780s by James Watt. The concept of deriving useful mechanical work from raw fuel such as wood, coal, oil, and now uranium was revolutionary. Watt also developed the slider-crank mechanism to convert reciprocating motion to rotary motion.

To improve on this first, basic engine, the people who followed Watt created the science of thermodynamics and perfected power transmission through gears, cams, shafts, bearings, and mechanical seals. A new vocabulary involving heat, energy, power, and torque was born with the steam engine.

MEMS Switches

Added on: March 4th, 2012

Compound solid-state switches such as GaAs MESFETs and PIN diodes are widely used in microwave and millimeter-wave integrated circuits (MMICs) for telecommunications applications, including signal routing, impedance matching networks, and adjustable gain amplifiers. However, these solid-state switches have a large insertion loss (typically 1 dB) in the on state and poor electrical isolation in the off state. Recent developments in micro-electro-mechanical systems (MEMS) have been continuously providing new and improved paradigms in the field of microwave applications. Differently configured micromachined miniature switches have been reported. Among these switches, capacitive membrane microwave switching devices present lower insertion loss, higher isolation, better linearity and zero static power consumption. In this presentation, we describe the design, fabrication and performance of a surface-micromachined capacitive microwave switch on a glass substrate using electroplating techniques.

Landmine Detection

Added on: March 4th, 2012

Landmines and unexploded ordnance (UXO) are a legacy of war, insurrection, and guerilla activity. Landmines kill and maim approximately 26,000 people annually. In Cambodia, whole areas of arable land cannot be farmed due to the threat of landmines. United Nations relief operations are made more difficult and dangerous due to the mining of roads. Current demining techniques are heavily reliant on metal detectors and prodders.

Technologies used for landmine detection include:

  • Metal detectors, capable of finding even low-metal-content mines in mineralized soils.
  • Nuclear magnetic resonance, fast neutron activation and thermal neutron activation.
  • Thermal imaging and electro-optical sensors, which detect evidence of buried objects.
  • Biological sensors such as dogs, pigs, bees and birds.
  • Chemical sensors, such as thermal fluorescence, which detect the airborne and waterborne presence of explosive vapors.

In this discussion, we will concentrate on ground-penetrating radar (GPR). This ultra-wideband radar provides centimeter resolution to locate even small targets. There are two distinct types of GPR: time-domain and frequency-domain. Time-domain or impulse GPR transmits discrete pulses of nanosecond duration and digitizes the returns at GHz sample rates. Frequency-domain GPR systems transmit single frequencies either uniquely, as a series of frequency steps, or as a chirp. The amplitude and phase of the return signal are measured, and the resulting data are converted to the time domain. GPR operates by detecting dielectric contrasts in the soil, which allows it to locate even non-metallic mines.
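The physical basis for this can be illustrated with the normal-incidence reflection coefficient between two dielectrics; the permittivity values below are illustrative textbook-style numbers, not measurements from the report.

```python
import math

def reflection_coefficient(eps1: float, eps2: float) -> float:
    # Normal-incidence amplitude reflection at a boundary between media
    # with relative permittivities eps1 (host soil) and eps2 (target).
    return ((math.sqrt(eps1) - math.sqrt(eps2)) /
            (math.sqrt(eps1) + math.sqrt(eps2)))

# Illustrative values: dry soil ~4, plastic/explosive ~3, wet soil ~20.
print(f"soil -> plastic mine: {reflection_coefficient(4.0, 3.0):+.3f}")
print(f"soil -> wet pocket  : {reflection_coefficient(4.0, 20.0):+.3f}")
```

Even the small contrast between dry soil and a plastic-cased mine produces a non-zero echo, which is why GPR can find non-metallic targets that defeat metal detectors.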

In this discussion we deal with buried anti-tank (AT) and anti-personnel (AP) landmines, which require close approach or contact to activate. AT mines, designed to impede the progress of or destroy vehicles, range from about 15 to 35 cm in size; they are typically buried up to 40 cm deep, but they can also be deployed on the surface of a road to block a column of machinery. AP mines, designed to kill and maim people, range from about 5 to 15 cm in size.

Intrusion Detection Systems

Added on: March 4th, 2012

An intrusion is an active sequence of related events that deliberately try to cause harm, such as rendering a system unusable, accessing unauthorized information or manipulating such information. To record information about both successful and unsuccessful attempts, security professionals place devices that examine network traffic, called sensors. These sensors are placed both in front of the firewall (the unprotected area) and behind the firewall (the protected area), and attacks are evaluated by comparing the information recorded by the two.

An Intrusion Detection System (IDS) can be defined as the tools, methods and resources that help identify, assess and report unauthorized activity. Intrusion detection is typically one part of an overall protection system installed around a system or device. IDSs work at the network layer of the OSI model, with sensors placed at choke points on the network. They analyze packets to find specific patterns in the network traffic; if they find such a pattern, an alert is logged and a response can be based on the data recorded.
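A toy signature matcher conveys the pattern-matching idea; the "signatures" below are invented examples, and real IDSs (e.g. Snort) use far richer rule languages and protocol decoding.

```python
# Minimal sketch of signature-based detection as described above.
SIGNATURES = {
    b"/etc/passwd":  "path traversal attempt",
    b"' OR '1'='1":  "SQL injection probe",
    b"\x90" * 16:    "NOP sled (possible shellcode)",
}

def inspect(payload: bytes, log: list) -> None:
    # Compare each packet payload against every known pattern and
    # log an alert on a match, as the text describes.
    for pattern, name in SIGNATURES.items():
        if pattern in payload:
            log.append(f"ALERT: {name}")

log = []
inspect(b"GET /../../etc/passwd HTTP/1.0", log)
inspect(b"username=admin' OR '1'='1", log)
print("\n".join(log))
```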

Power Monitoring System

Added on: March 3rd, 2012

The microcontroller-based power monitoring system is an electronic device used to continuously monitor power parameters such as voltage, current and frequency at various points of an electric or electronic device. The entire system is controlled by the microcontroller (80C31) and performs real-time monitoring of the various parameters, hence the name "real-time monitoring of a high-capacity (400 KVA-600 KVA) battery back-up system". The system is an 8-channel device which accepts 8 analog input signals and consists of an analog multiplexer, A/D converter, ROM, RAM, buffers, etc. The different channels are selected by a simple switch operation. From the UPS, the channels and alarms are given to the microcontroller, which processes and controls the parameters.
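As a sketch of how raw readings from an 8-bit A/D converter might be scaled into engineering units (the divider ratio and shunt value here are invented for the example, not taken from the design):

```python
VREF = 5.0          # ADC full-scale reference voltage (assumed)
DIVIDER = 100.0     # e.g. a 500 V bus stepped down to the 0-5 V range
SHUNT_OHMS = 0.01   # current-sense resistor (assumed)

def counts_to_volts(counts: int) -> float:
    # 8-bit ADC: 0..255 counts span 0..VREF at the ADC pin.
    return counts / 255.0 * VREF * DIVIDER

def counts_to_amps(counts: int) -> float:
    # Shunt voltage divided by resistance gives the load current.
    return (counts / 255.0 * VREF) / SHUNT_OHMS

print(f"bus voltage : {counts_to_volts(200):.0f} V")
print(f"load current: {counts_to_amps(51):.0f} A")
```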

Neuro Chips

Added on: March 3rd, 2012

Until recently, neurobiologists have used computers for simulation, data collection, and data analysis, but not to interact directly with nerve tissue in live, behaving animals. Although digital computers and nerve tissue both use voltage waveforms to transmit and process information, engineers and neurobiologists have yet to cohesively link the electronic signaling of digital computers with the electronic signaling of nerve tissue in freely behaving animals.

Recent advances in microelectromechanical systems (MEMS), CMOS electronics, and embedded computer systems will finally let us link computer circuitry to neural cells in live animals and, in particular, to reidentifiable cells with specific, known neural functions. The key components of such a brain-computer system include neural probes, analog electronics, and a miniature microcomputer. Researchers developing neural probes such as submicron MEMS probes, microclamps, microprobe arrays, and similar structures can now penetrate and make electrical contact with nerve cells without causing significant or long-term damage to probes or cells.

Researchers developing analog electronics such as low-power amplifiers and analog-to-digital converters can now integrate these devices with microcontrollers on a single low-power CMOS die. Further, researchers developing embedded computer systems can now incorporate all the core circuitry of a modern computer on a single silicon chip that can run on minuscule power from a tiny watch battery. In short, engineers have all the pieces they need to build truly autonomous implantable computer systems.

Until now, high signal-to-noise recording as well as digital processing of real-time neuronal signals has been possible only in constrained laboratory experiments. By combining MEMS probes with analog electronics and modern CMOS computing into self-contained, implantable Microsystems, implantable computers will free neuroscientists from the lab bench.

Artificial Eye

Added on: March 3rd, 2012

The retina is a thin layer of neural tissue that lines the back wall inside the eye. Some of these cells act to receive light, while others interpret the information and send messages to the brain through the optic nerve. This is part of the process that enables us to see. In a damaged or dysfunctional retina, the photoreceptors stop working, causing blindness. By some estimates, more than 10 million people worldwide are affected by retinal diseases that lead to loss of vision.

The absence of effective therapeutic remedies for retinitis pigmentosa (RP) and age-related macular degeneration (AMD) has motivated the development of experimental strategies to restore some degree of visual function to affected patients. Because the remaining retinal layers are anatomically spared, several approaches have been designed to artificially activate this residual retina and thereby the visual system.

At present, two general strategies have been pursued. The “Epiretinal” approach involves a semiconductor-based device placed above the retina, close to or in contact with the nerve fiber layer retinal ganglion cells. The information in this approach must be captured by a camera system before transmitting data and energy to the implant. The “Sub retinal” approach involves the electrical stimulation of the inner retina from the sub retinal space by implantation of a semiconductor-based micro photodiode array (MPA) into this location. The concept of the sub retinal approach is that electrical charge generated by the MPA in response to a light stimulus may be used to artificially alter the membrane potential of neurons in the remaining retinal layers in a manner to produce formed images.

Some researchers have developed an implant system, called a cortical implant, in which a video camera captures images, a chip processes the images, and an electrode array transmits the images to the brain.

Abrasive Jet Machining

Added on: March 3rd, 2012

Abrasive water jet machine tools are suddenly a hit in the market, since they are quick to program and can make money on short runs. They are quick to set up and offer quick turn-around on the machine. They complement existing tools used for either primary or secondary operations and can make parts quickly out of virtually any material. One major advantage is that they do not heat the material. All sorts of intricate shapes are easy to make. They turn out to be money-making machines.

So, ultimately, a machine shop without a water jet is like a carpenter without a hammer. Sure, the carpenter can use the back of his crowbar to hammer in nails, but there is a better way. It is important to understand that abrasive jets are not the same thing as water jets, although they are nearly the same. Water jet technology has been around since the early 1970s or so, and abrasive jets extended the concept about ten years later. Both technologies use the principle of pressurizing water to extremely high pressure and allowing the water to escape through an opening typically called the orifice or jewel. Water jets use the beam of water exiting the orifice to cut soft materials like candy bars, but are not effective for cutting harder materials. The inlet water is typically pressurized to between 20,000 and 60,000 pounds per square inch (PSI). This is forced through a tiny hole in the jewel, typically 0.007" to 0.015" in diameter (0.18 to 0.4 mm), creating a very high-velocity beam of water. Abrasive jets use the same beam of water to accelerate abrasive particles to speeds fast enough to cut through much harder materials.
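The quoted pressures imply jet velocities that can be estimated with Bernoulli's relation, as in this sketch (an ideal, lossless orifice is assumed; real jets run somewhat slower).

```python
import math

def jet_velocity(pressure_psi: float) -> float:
    # Ideal exit velocity from a pressurized orifice: v = sqrt(2 P / rho).
    pascals = pressure_psi * 6894.76   # PSI -> Pa
    rho_water = 1000.0                 # kg/m^3
    return math.sqrt(2.0 * pascals / rho_water)

for psi in (20_000, 60_000):
    print(f"{psi} psi -> ~{jet_velocity(psi):.0f} m/s")
```

The 60,000 PSI case works out to roughly 900 m/s, around Mach 2.7 in air, which is what lets entrained abrasive grains cut hard materials.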

Aerospace Flywheel Development

Added on: March 3rd, 2012

Presently, energy storage on the Space Station and satellites is accomplished using chemical batteries, most commonly nickel-hydrogen or nickel-cadmium. A flywheel energy storage system is an alternative technology being considered as a replacement for the traditional electrochemical battery system in spacecraft electrical power systems for future space missions. Flywheels offer the advantages of a longer lifetime, higher efficiency and a greater depth of discharge than batteries; the flywheel system is expected to improve both the depth of discharge and working life by a factor of 3 compared with its battery counterpart. Although flywheels have always been used in spacecraft navigation and guidance systems, their use for energy storage is new. However, the two functions can easily be combined into a single system. Several advanced technologies must be demonstrated for the flywheel energy storage system to be a viable option for future space missions. These include high-strength composite materials, highly efficient high-speed motor operation and control, and magnetic bearing levitation.
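The energy stored in a spinning rotor follows E = ½Iω²; the sketch below uses invented rotor numbers just to show the scale, not actual spacecraft design values.

```python
import math

def flywheel_energy_wh(mass_kg: float, radius_m: float, rpm: float) -> float:
    inertia = 0.5 * mass_kg * radius_m ** 2     # solid-disc approximation
    omega = rpm * 2.0 * math.pi / 60.0          # rev/min -> rad/s
    return 0.5 * inertia * omega ** 2 / 3600.0  # joules -> watt-hours

# Hypothetical 20 kg, 30 cm composite rotor at 60,000 rpm:
print(f"{flywheel_energy_wh(mass_kg=20, radius_m=0.15, rpm=60000):.0f} Wh")
```

Because energy grows with the square of speed, the enabling technologies listed above (composites, high-speed motors, magnetic bearings) all serve to push the safe operating speed upward.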

High Speed Trains

Added on: March 3rd, 2012

When English inventor Richard Trevithick introduced the steam locomotive on 21 February 1804 in Wales, it achieved a speed of 8 km/h (5 mph). In 1815, Englishman George Stephenson built the world’s first workable steam locomotive. In 1825, he introduced the first passenger train, which steamed along at 25 km/h (16 mph). Today, trains can fly down the tracks at 500 km/h (311 mph). And fly they do, not touching the tracks.

There is no defined speed at which you can call a train a high-speed train, but trains running at and above 150 km/h are called high-speed trains.

Vehicle Skid Control

Added on: March 2nd, 2012

Vehicle skid can be defined as the loss of traction between a vehicle's tyres and the road surface due to the forces acting on the vehicle. Most skids are caused by driver error, although only about 15% of accidents are the direct result of a vehicle skidding. Skids occurring in other accidents are usually the result of last-minute action by the driver when faced with a crisis ahead, rather than actually causing the accident. Skids can occur in dry, wet and icy conditions; however, the chance of losing control and having an accident increases by 50% in the wet. The most common types of skid we will be confronted with are when the rear end of the car slides out, causing oversteer, and when the front of the car plows toward the outside of a turn without following the curve of the turn, causing understeer. Usually, oversteer occurs as a result of going into a corner too fast or incorrectly hitting a slick area, causing the rear wheels to slide. A third type, the four-wheel skid, can also occur, where all four wheels lock up and the vehicle slides in the direction the forward momentum is carrying it, with no directional control.

To counter these skids and prevent accidents, Vehicle Skid Control (VSC) is incorporated in the vehicle. VSC takes the safety of the driver and the vehicle to the next level. It comes under the category of "passive technology", which helps you avoid a crash. VSC senses the onset of traction loss and helps the driver stay on track. This is achieved via the system's ability to reduce engine power and to control the brake actuator. VSC helps the driver maintain vehicle traction under demanding conditions by detecting and helping to correct wheel spin. VSC uses a variety of sensor inputs to determine if the car is losing traction, then applies the brakes to individual wheels to help correct for discrepancies. The system will also back off the throttle to reduce power. VSC integrates traction control to limit rear wheelspin on slippery surfaces. The system electronically monitors speed and direction, and compares the vehicle's direction of travel with the driver's steering, acceleration and braking inputs. VSC can help the driver compensate for loss of lateral traction, which can cause skids and loss of vehicle control.
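A caricature of the detection core (all names, models and thresholds are invented for illustration): compare the yaw rate implied by the driver's steering with the measured yaw rate, and brake an individual wheel when they diverge.

```python
def vsc_step(steer_angle_rad: float, speed_ms: float, measured_yaw: float,
             wheelbase: float = 2.7, thresh: float = 0.1) -> str:
    # Kinematic "bicycle model" estimate of the yaw rate the driver wants.
    intended_yaw = speed_ms * steer_angle_rad / wheelbase
    error = measured_yaw - intended_yaw
    if error > thresh:    # rear sliding out (oversteer)
        return "brake outer front wheel, cut throttle"
    if error < -thresh:   # nose pushing wide (understeer)
        return "brake inner rear wheel, cut throttle"
    return "no intervention"

print(vsc_step(0.1, 25.0, measured_yaw=1.4))   # oversteer correction
print(vsc_step(0.1, 25.0, measured_yaw=0.4))   # understeer correction
```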
