Instrumentation has existed in one form or another for hundreds of years. The oldest manometer was invented by Evangelista Torricelli in 1643, and the thermometer has been credited to several scientists of roughly the same period. Over that time, small and large-scale industrial plants have always had a use for measurement. For the most part, these were passive measuring devices; if a process needed control, someone was hired at a low wage to control it manually.
World War II brought about a revolution in the use of instrumentation.[1] Advanced processes required tighter control than people could provide, and advanced instruments were needed to take measurements in modern processes. The war also left industry with a substantially reduced workforce. Industrial instrumentation solved both problems, leading to a rise in its use. Pipe fitters had to learn more about instrumentation and control theory, and a new trade was born.[2]
Today, instrument mechanics have more in common with electricians than with pipe fitters. Almost all new instrumentation is electronic, using either 4-20 mA control signals or digital signalling standards.
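The 4-20 mA standard maps the low end of a transmitter's range to 4 mA and the high end to 20 mA, so a reading is recovered by simple linear scaling. A minimal sketch, assuming a hypothetical transmitter ranged 0-100 degC:

```python
def current_to_value(current_ma, low=0.0, high=100.0):
    """Linearly scale a 4-20 mA loop current to an engineering value."""
    # The "live zero" at 4 mA means 0 mA signals a fault (e.g. a broken wire).
    if current_ma < 3.8:
        raise ValueError("loop current too low; possible open circuit")
    return low + (current_ma - 4.0) * (high - low) / 16.0

# 12 mA is mid-scale, so a 0-100 degC transmitter reads 50 degC:
print(current_to_value(12.0))  # 50.0
```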
Fields of study
Instrument mechanics are required to study a large body of knowledge, including:[4]
* Process Control
* Measurement Instrumentation
* Final Control Elements
* Motors
* Electronics
* Industrial networks
* Signalling standards
* Chemistry
* Fluid Dynamics
Cables
Electrical wiring in general refers to insulated conductors used to carry electricity, and the associated devices. This article describes general aspects of electrical wiring as used to provide power in buildings and structures, commonly referred to as building wiring, and covers common features that should apply worldwide.
A cable is two or more wires or ropes running side by side and bonded, twisted or braided together to form a single assembly. In mechanics, cables are used for lifting and hauling; in electricity they are used to carry electrical currents. An optical cable contains one or more optical fibers in a protective jacket that supports the fibers. Mechanical cable is more specifically called wire rope.
Electric cables discussed here are mainly meant for installation in buildings and industrial sites. For power transmission over distances from a few kilometres up to 600 km, see high-voltage cable, power cable, and HVDC.
Electrical cables may be made more flexible by stranding the wires. In this process, smaller individual wires are twisted or braided together to produce larger wires that are more flexible than solid wires of similar size. Bunching small wires before concentric stranding adds the most flexibility. Copper wires in a cable may be bare, or they may be coated with a thin layer of another material, most often tin but sometimes gold, silver, or another metal. Tin, gold, and silver are much less prone to oxidation than copper, which may lengthen wire life and makes soldering easier. A tight lay during stranding also makes the cable extensible, as in coiled telephone handset cords.
Cables can be securely fastened and organized, such as by using cable trees with the aid of cable ties or cable lacing. Continuous-flex or flexible cables used in moving applications within cable carriers can be secured using strain-relief devices or cable ties. Because copper corrodes easily, it is often protected with a layer of lacquer.
At high frequencies, current tends to run along the surface of the conductor and avoid the core. This is known as the skin effect. It may change the relative desirability of solid versus stranded wires.
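The effect can be quantified by the skin depth, the depth at which current density falls to 1/e of its surface value, given by the standard formula delta = sqrt(2*rho / (omega*mu)). A small sketch using textbook values for copper:

```python
import math

def skin_depth(freq_hz, resistivity=1.68e-8, mu=4e-7 * math.pi):
    """Skin depth in metres; defaults are textbook values for copper."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * resistivity / (omega * mu))

for f in (50, 1e3, 1e6):
    print(f"{f:>9.0f} Hz: {skin_depth(f) * 1e3:.3f} mm")
# ~9.2 mm at 50 Hz but only ~0.065 mm at 1 MHz, which is why
# high-frequency cables favour stranded or litz construction.
```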
Electric trace heating, also known as electric heat tracing or surface heating, is a system used to maintain or raise the temperature of pipes and vessels. Trace heating takes the form of an electrical heating element run in physical contact along the length of a pipe. The pipe is then covered with thermal insulation to reduce heat losses from the pipe. Heat generated by the element then maintains the temperature of the pipe.
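As a rough illustration of sizing such a circuit, the heater must at least balance the loss through the insulation, here estimated with steady radial conduction only, q = 2*pi*k*dT / ln(r2/r1). This is a sketch: the insulation conductivity is an assumed value, and real designs also account for surface films, wind, and safety margins.

```python
import math

def trace_heat_per_metre(t_maintain, t_ambient, pipe_od, insul_thickness,
                         k_insulation=0.04):
    """Rough heater output (W/m) to balance loss through pipe insulation.

    Uses steady radial conduction through the insulation only and ignores
    surface film resistance, so it slightly overestimates the loss.
    k = 0.04 W/m.K is typical of mineral wool (an assumed value).
    """
    r1 = pipe_od / 2
    r2 = r1 + insul_thickness
    return 2 * math.pi * k_insulation * (t_maintain - t_ambient) / math.log(r2 / r1)

# Keep a 60 mm pipe at 60 degC in -10 degC air with 25 mm of insulation:
print(f"{trace_heat_per_metre(60, -10, 0.060, 0.025):.1f} W/m")  # ~29 W/m
```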
Power electronics
Standby power, also called vampire power, vampire draw, phantom load, or leaking electricity, refers to the electric power consumed by electronic appliances while they are switched off or in a standby mode. A very common "electricity vampire" is a power adapter which has no power-off switch. Some such devices offer remote controls and digital clock features to the user, while other devices, such as power adapters for laptop computers and other electronic devices, consume power without offering any features.
Power electronic converters can be found wherever there is a need to modify the form of electrical energy (i.e. to modify its voltage, current or frequency). The power range of these converters extends from milliwatts (as in a mobile phone) to hundreds of megawatts (e.g. in an HVDC transmission system). Whereas "classical" electronics uses electrical currents and voltages to carry information, power electronics uses them to carry power. The main metric of power electronics is therefore efficiency.
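As a simple worked example of that metric: efficiency is output power over input power, and whatever is not delivered is dissipated as heat that the converter must shed (the wattages below are illustrative).

```python
def converter_efficiency(p_out_w, p_in_w):
    """Return (efficiency, dissipated loss in watts) of a converter stage."""
    eta = p_out_w / p_in_w
    return eta, p_in_w - p_out_w

# A hypothetical 500 W supply drawing 550 W from the mains:
eta, loss = converter_efficiency(500, 550)
print(f"efficiency {eta:.1%}, {loss:.0f} W lost as heat")  # 90.9%, 50 W
```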
The first very high power electronic devices were mercury-arc valves. In modern systems the conversion is performed with semiconductor switching devices such as diodes, thyristors and transistors. In contrast to electronic systems concerned with the transmission and processing of signals and data, power electronics processes substantial amounts of electrical energy. An AC/DC converter (rectifier) is the most typical power electronics device, found in many consumer electronic devices such as television sets, personal computers and battery chargers; the power range is typically from tens to several hundred watts. In industry the most common application is the variable-speed drive (VSD), used to control an induction motor. The power range of VSDs starts at a few hundred watts and ends at tens of megawatts.
Sports
Golf is a precision club-and-ball sport in which competing players (golfers), using many types of clubs, attempt to hit balls into each hole on a golf course using the fewest strokes. Golf is one of the few ball games that does not require a standardized playing area. Instead, the game is played on golf "courses", each of which features a unique design, although courses typically consist of either nine or 18 holes. Golf is defined, in the rules of golf, as "playing a ball with a club from the teeing ground into the hole by a stroke or successive strokes in accordance with the Rules." Golf competition is generally played for the lowest number of strokes by an individual, known simply as stroke play, or for the lowest score on the most individual holes during a complete round by an individual or team, known as match play.
The origin of golf is unclear and open to debate. Some historians trace the sport back to the Roman game of paganica, in which participants used a bent stick to hit a stuffed leather ball. One theory asserts that paganica spread throughout Britain and Europe as the Romans conquered much of the continent during the first century B.C., and eventually evolved into the modern game.[2] Others cite chuiwan ("chui" means striking and "wan" means small ball), a Chinese game played between the eighth and 14th centuries, as the progenitor.[3] The game is thought to have been introduced into Europe during the Middle Ages. Another early game that resembled modern golf was known as cambuca in England and chambot in France.[4] This game was, in turn, exported to the Low Countries, Germany, and England (where it was called pall-mall, pronounced "pell mell"). Some observers, however, believe that golf descended from the Persian game chaugán. In addition, kolven (a game involving a ball and curved bats) was played annually in Loenen, Netherlands, beginning in 1297, to commemorate the capture of the assassin of Floris V a year earlier.
According to the most widely accepted account, however, the modern game originated in Scotland around the 12th century, with shepherds knocking stones into rabbit holes on the current site of the Royal and Ancient Golf Club of St. Andrews.
Cricket is a bat-and-ball team sport first documented as being played in southern England in the 16th century. By the end of the 18th century, cricket had developed to the point where it had become the national sport of England. The expansion of the British Empire led to cricket being played overseas, and by the mid-19th century the first international matches were being held. Today, the game's governing body, the International Cricket Council (ICC), has 104 member countries.[1] With its greatest popularity in the Test-playing countries, cricket is widely regarded as the world's second most popular sport.
Integrated devices
An integrated device manufacturer (IDM) is a semiconductor company which designs, manufactures, and sells integrated circuit (IC) products. As a classification, IDM is often used to differentiate between a company which handles semiconductor manufacturing in-house, and a fabless semiconductor company, which outsources production to a third-party. Due to the dynamic nature of the semiconductor industry, the term IDM has become less accurate than when it was coined.
An Integrated Access Device (or IAD) is a customer premises device that provides access to wide area networks and the Internet. Specifically, it aggregates multiple channels of information including voice and data across a single shared access link to a carrier or service provider PoP (Point of Presence). The access link may be a T1 line, a DSL connection, a cable (CATV) network, a broadband wireless link, or a metro-Ethernet connection.
At the PoP, the customer's aggregated information is typically directed into an Add-drop multiplexer or an MSPP (multiservice provisioning platform), which are complex and expensive devices that sit between customers and the core network. They manage traffic streams coming from customers and forward those streams to the PSTN (voice) or appropriate wide area networks (ATM, frame relay, or the Internet).
An IAD is sometimes installed by the service provider to which a customer wishes to connect. This allows the service provider to control the features of the access link and manage its operation during use. Competitive service providers now offer access services over a variety of access technologies, including wireless optical (e.g., Terabeam) and metro-Ethernet networks. Old telco protocols and transport methods (T1 lines and time-division multiplexing) are replaced with access methods appropriate for the underlying transport. Because of this, the provider will usually specify or install an appropriate IAD.
Sensor
A sensor is a device that measures a physical quantity and converts it into a signal which can be read by an observer or by an instrument. For example, a mercury-in-glass thermometer converts the measured temperature into expansion and contraction of a liquid which can be read on a calibrated glass tube. A thermocouple converts temperature to an output voltage which can be read by a voltmeter. For accuracy, all sensors need to be calibrated against known standards.
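In the field, calibration often amounts to a two-point (zero and span) linear fit between reference conditions. A minimal sketch, with hypothetical thermocouple-amplifier voltages at the ice and boiling points:

```python
def two_point_calibration(raw_lo, raw_hi, ref_lo, ref_hi):
    """Return a function mapping raw sensor readings to calibrated values.

    Assumes the sensor is linear between the two reference points, which
    is how many field instruments are zero/span calibrated.
    """
    slope = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    return lambda raw: ref_lo + slope * (raw - raw_lo)

# A hypothetical amplifier reads 0.40 V in ice water (0 degC)
# and 2.10 V in boiling water (100 degC):
to_celsius = two_point_calibration(0.40, 2.10, 0.0, 100.0)
print(f"{to_celsius(1.25):.1f} degC")  # 50.0
```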
The resolution of a sensor is the smallest change it can detect in the quantity that it is measuring. Often in a digital display, the least significant digit will fluctuate, indicating that changes of that magnitude are only just resolved. The resolution is related to the precision with which the measurement is made. For example, a scanning tunneling probe (a fine tip near a surface that collects an electron tunneling current) can resolve atoms and molecules.
Automotive oxygen sensors, colloquially known as O2 sensors, make modern electronic fuel injection and emission control possible. They help determine, in real time, whether the air-fuel ratio of a combustion engine is rich or lean. Since oxygen sensors are located in the exhaust stream, they do not directly measure the air or the fuel entering the engine, but when information from oxygen sensors is coupled with information from other sources, it can be used to indirectly determine the air-to-fuel ratio. Closed-loop, feedback-controlled fuel injection varies the fuel injector output according to real-time sensor data rather than operating with a predetermined (open-loop) fuel map. In addition to enabling electronic fuel injection to work efficiently, this emissions-control technique can reduce the amounts of both unburnt fuel and oxides of nitrogen entering the atmosphere. Unburnt fuel is pollution in the form of airborne hydrocarbons, while oxides of nitrogen (NOx gases) result from combustion-chamber temperatures exceeding 1300 kelvin, due to excess air in the fuel mixture, and contribute to smog and acid rain. Volvo was the first automobile manufacturer to employ this technology in the late 1970s, along with the three-way catalyst used in the catalytic converter.
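A conventional narrow-band zirconia sensor essentially reports only rich or lean, so the simplest closed-loop strategy nudges the fuel trim the opposite way each cycle, oscillating tightly around stoichiometry. A sketch with illustrative thresholds and step size (not any manufacturer's values):

```python
def fuel_trim_step(sensor_volts, trim, step=0.005):
    """One iteration of a narrow-band closed-loop fuel trim.

    A zirconia sensor reads high (~0.8 V) when the mixture is rich and
    low (~0.2 V) when lean, switching near stoichiometry; the controller
    nudges the injector trim the opposite way each cycle.
    """
    if sensor_volts > 0.45:   # rich: reduce fuel
        return trim - step
    return trim + step        # lean: add fuel

trim = 0.0
for v in (0.82, 0.78, 0.21, 0.35):   # simulated sensor readings
    trim = fuel_trim_step(v, trim)
print(f"trim now {trim:+.3f}")       # hovers around stoichiometry
```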
SCADA
SCADA stands for supervisory control and data acquisition. It generally refers to an industrial control system: a computer system monitoring and controlling a process. The process can be industrial, infrastructure or facility based as described below:
* Industrial processes include those of manufacturing, production, power generation, fabrication, and refining, and may run in continuous, batch, repetitive, or discrete modes.
* Infrastructure processes may be public or private, and include water treatment and distribution, wastewater collection and treatment, oil and gas pipelines, electrical power transmission and distribution, civil defense siren systems, and large communication systems.
* Facility processes occur both in public facilities and private ones, including buildings, airports, ships, and space stations. They monitor and control HVAC, access, and energy consumption.
There is, in several industries, considerable confusion over the differences between SCADA systems and distributed control systems (DCS). Generally speaking, SCADA usually refers to a system that coordinates, but does not control, processes in real time. The discussion of real-time control is muddied somewhat by newer telecommunications technology enabling reliable, low-latency, high-speed communications over wide areas. Most differences between SCADA and DCS are culturally determined and can usually be ignored. As communication infrastructures with higher capacity become available, the difference between SCADA and DCS will fade.
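At its core, the supervisory part of SCADA is a poll-and-alarm loop over field values. A toy sketch, with read_register standing in for a real protocol driver (e.g. a Modbus read); the tags and limits are invented:

```python
import random
import time

# Hypothetical tag table: name -> (low limit, high limit)
TAGS = {"tank_level_pct": (10.0, 90.0), "line_pressure_bar": (1.0, 6.0)}

def read_register(tag):
    """Placeholder field read; a real master would query an RTU or PLC."""
    return random.uniform(0, 100) if "level" in tag else random.uniform(0, 8)

def poll_once():
    for tag, (lo, hi) in TAGS.items():
        value = read_register(tag)
        status = "ok" if lo <= value <= hi else "ALARM"
        print(f"{status:5} {tag} = {value:.1f} (limits {lo}-{hi})")

for _ in range(3):   # a real master repeats this on a fixed scan interval
    poll_once()
    time.sleep(0.1)
```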
Control system
A control system is a device or set of devices to manage, command, direct or regulate the behavior of other devices or systems.
There are two common classes of control systems, with many variations and combinations: logic or sequential controls, and feedback or linear controls. There is also fuzzy logic, which attempts to combine some of the design simplicity of logic with the utility of linear control. Some devices or systems are inherently not controllable.
Modern control engineering (also called control systems engineering) is a relatively new field of study that gained significant attention during the twentieth century with the advancement of technology. It can be broadly defined as the practical application of control theory. Control engineering plays an essential role in a wide range of control systems, from a simple household washing machine to a complex, high-performance F-16 fighter aircraft. It allows one to understand a physical system in terms of its inputs, outputs and various components with different behaviors using mathematical modeling, to control it in a desired manner with controllers designed using control systems design tools, and to implement the controller on the physical system using available technology. A system can be mechanical, electrical, fluid, chemical, financial or even biological, and the mathematical modeling, analysis and controller design are done using control theory in one or more of the time, frequency and complex-s domains, depending on the nature of the control system design problem.
Intimate knowledge of the physical system being controlled is often desired.
Electrical circuits, digital signal processors and microcontrollers can all be used to implement control systems. Control engineering has a wide range of applications, from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles.
In most cases, control engineers utilize feedback when designing control systems. This is often accomplished using a PID controller. For example, in an automobile with cruise control, the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's torque accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to that feedback. In practically all such systems stability is important, and control theory can help ensure stability is achieved.
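A textbook PID loop in the cruise-control spirit of the example above; the gains and the crude first-order vehicle model are illustrative, not tuned values for any real car:

```python
class PID:
    """Textbook PID controller; a sketch, not a tuned production loop."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy cruise control: the torque command drives an invented speed model.
pid = PID(kp=80.0, ki=10.0, kd=5.0, setpoint=100.0)   # hold 100 km/h
speed, dt = 90.0, 0.1
for _ in range(200):
    torque = pid.update(speed, dt)
    speed += dt * (torque - 2.0 * speed) / 15.0       # illustrative dynamics
print(f"speed after 20 s: {speed:.1f} km/h (approaching the setpoint)")
```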
Although feedback is an important aspect of control engineering, control engineers may also work on the control of systems without feedback. This is known as open loop control. A classic example of open loop control is a washing machine that runs through a pre-determined cycle without the use of sensors.
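By contrast, an open-loop controller is just a fixed sequence with no measurement. A washing-machine-style sketch with made-up step durations:

```python
import time

# Open-loop control: a fixed, timer-driven sequence runs regardless of
# any measurement.  Step durations are illustrative.
CYCLE = [("fill", 2), ("wash", 5), ("drain", 1), ("rinse", 3), ("spin", 4)]

for step, minutes in CYCLE:
    print(f"{step} for {minutes} min")
    time.sleep(0.01)  # stand-in for the real dwell time
```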
Automation
Automation is the use of control systems (such as numerical control, programmable logic control, and other industrial control systems), in concert with other applications of information technology (such as computer-aided technologies [CAD, CAM, CAx]), to control industrial machinery and processes, reducing the need for human intervention.[1] In the scope of industrialization, automation is a step beyond mechanization. Whereas mechanization provided human operators with machinery to assist them with the muscular requirements of work, automation greatly reduces the need for human sensory and mental requirements as well. Processes and systems can also be automated.
Automation plays an increasingly important role in the global economy and in daily experience. Engineers strive to combine automated devices with mathematical and organizational tools to create complex systems for a rapidly expanding range of applications and human activities.
Many roles for humans in industrial processes presently lie beyond the scope of automation. Human-level pattern recognition, language recognition, and language production ability are well beyond the capabilities of modern mechanical and computer systems. Tasks requiring subjective assessment or synthesis of complex sensory data, such as scents and sounds, as well as high-level tasks such as strategic planning, currently require human expertise. In many cases, the use of humans is more cost-effective than mechanical approaches even where automation of industrial tasks is possible.
Specialised hardened computers, referred to as programmable logic controllers (PLCs), are frequently used to synchronize the flow of inputs from (physical) sensors and events with the flow of outputs to actuators and events. This leads to precisely controlled actions that permit tight control of almost any industrial process.
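A PLC repeats a fixed scan: read all inputs, evaluate the logic, write all outputs. A minimal sketch of one classic seal-in motor rung, where the dictionaries stand in for real hardware I/O tables:

```python
inputs = {"start_pb": True, "stop_pb": False, "motor_overload": False}
outputs = {"motor_run": False}

def scan(inputs, outputs):
    # Rung: the motor runs if start is pressed (or it is already running,
    # the "seal-in"), and neither the stop button nor the overload is active.
    seal_in = inputs["start_pb"] or outputs["motor_run"]
    outputs["motor_run"] = (seal_in and not inputs["stop_pb"]
                            and not inputs["motor_overload"])

for _ in range(3):      # the real controller repeats this every few ms
    scan(inputs, outputs)
print(outputs)          # {'motor_run': True}
```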
Human-machine interfaces (HMI) or computer human interfaces (CHI), formerly known as man-machine interfaces, are usually employed to communicate with PLCs and other computers, such as entering and monitoring temperatures or pressures for further automated control or emergency response. Service personnel who monitor and control these interfaces are often referred to as stationary engineers.
Building automation describes the functionality provided by the control system of a building. A building automation system (BAS) is an example of a distributed control system. The control system is a computerized, intelligent network of electronic devices, designed to monitor and control the mechanical and lighting systems in a building.
BAS core functionality keeps the building climate within a specified range, provides lighting based on an occupancy schedule, and monitors system performance and device failures and provides email and/or text notifications to building engineering staff. The BAS functionality reduces building energy and maintenance costs when compared to a non-controlled building. A building controlled by a BAS is often referred to as an intelligent building system.
Home automation (also called domotics) designates an emerging practice of increased automation of household appliances and features in residential dwellings, particularly through electronic means that allow for things that would have been impracticable, overly expensive or simply impossible in past decades. The term may be used in contrast to the more mainstream "building automation", which refers to larger settings and the automatic or semi-automatic control of lighting, climate, doors and windows, and security and surveillance systems. The techniques employed in home automation include those in building automation as well as the control of home entertainment systems, houseplant watering, pet feeding, "scenes" for different events (such as dinners or parties), and the use of domestic robots.
Oil refinery
An oil refinery is an industrial process plant where crude oil is processed and refined into more useful petroleum products, such as gasoline, diesel fuel, asphalt base, heating oil, kerosene and liquefied petroleum gas.[1][2] Oil refineries are typically large sprawling industrial complexes with extensive piping running throughout, carrying streams of fluids between large chemical processing units.
Raw or unprocessed crude oil is not generally useful. Although "light, sweet" (low viscosity, low sulfur) crude oil has been used directly as a burner fuel for steam vessel propulsion, the lighter elements form explosive vapors in the fuel tanks and are therefore hazardous, especially in warships. Instead, the hundreds of different hydrocarbon molecules in crude oil are separated in a refinery into components which can be used as fuels, lubricants, and as feedstock in petrochemical processes that manufacture such products as plastics, detergents, solvents, elastomers and fibers such as nylon and polyesters.
Petroleum fossil fuels are burned in internal combustion engines to provide power for ships, automobiles, aircraft engines, lawn mowers, chainsaws, and other machines. Different boiling points allow the hydrocarbons to be separated by distillation. Since the lighter liquid products are in great demand for use in internal combustion engines, a modern refinery will convert heavy hydrocarbons and lighter gaseous elements into these higher value products.
The refining process releases numerous different chemicals into the atmosphere; consequently, there are substantial air pollution emissions[7] and a notable odor normally accompanies the presence of a refinery. Aside from air pollution impacts there are also wastewater concerns,[3] risks of industrial accidents such as fire and explosion, and noise health effects due to industrial noise.
Cosmetics
Cosmetics are substances used to enhance the appearance or odor of the human body. Cosmetics include skin-care creams, lotions, powders, perfumes, lipsticks, fingernail and toenail polish, eye and facial makeup, permanent waves, colored contact lenses, hair colors, hair sprays[3] and gels, deodorants, baby products, bath oils, bubble baths, bath salts, butters and many other types of products. Their use is widespread, especially among women in Western countries. A subset of cosmetics is called "make-up", which refers primarily to colored products intended to alter the user's appearance. Many manufacturers distinguish between decorative cosmetics and care cosmetics.
The manufacture of cosmetics is currently dominated by a small number of multinational corporations that originated in the early 20th century, but the distribution and sale of cosmetics are spread among a wide range of different businesses. The U.S. Food and Drug Administration (FDA), which regulates cosmetics in the United States,[1] defines cosmetics as products "intended to be applied to the human body for cleansing, beautifying, promoting attractiveness, or altering the appearance without affecting the body's structure or functions." This broad definition also includes any material intended for use as a component of a cosmetic product. The FDA specifically excludes soap from this category.
Plastic surgery is a medical specialty concerned with the correction or restoration of form and function. While famous for aesthetic surgery, plastic surgery also includes two main fields: body modification and reconstructive surgery. The word "plastic" derives from the Greek plastikos meaning to mold or to shape; its use here is not connected with the synthetic polymer material known as plastic.
Perfume
Perfume is a mixture of fragrant essential oils and aroma compounds, fixatives, and solvents used to give the human body, animals, objects, and living spaces a pleasant smell. Prince Matchabelli is a perfume line created by Prince Georges V. Matchabelli, an amateur chemist. Georges Matchabelli was a Georgian prince and Georgian ambassador to Italy, but he fled the Soviet Union and immigrated to the United States after the Russian Revolution. In New York City he and his wife, Princess Norina Matchabelli (an actress whose stage name was Maria Carmi), opened a small antiques shop, Le Rouge et le Noir, at 545 Madison Avenue. The name came from Stendhal's novel: red for aristocracy (Matchabelli's origins) and black for clergy (The Miracle, a famous religious play in which Norina had starred). They later established the Prince Matchabelli Perfume Company in 1926. Perfumes were personally blended for clients by Prince Matchabelli; the first three were Princess Norina, Queen of Georgia and Ave Maria. The company became known for its many color-coded, crown-shaped bottles, designed by Norina after the Matchabelli crown and introduced in 1928 with labels on the underside.
Houbigant was a perfume manufacturer founded in Paris, France in 1775 by Jean-François Houbigant of Grasse (1752-1807), originally selling gloves, perfumes, and bridal bouquets. The original shop, called "A la Corbeille de Fleurs", was in the rue du Faubourg Saint-Honoré. Clients included Queen Marie-Antoinette of France; two French emperors; Princess Adélaïde d'Orléans (1829); Princess Dagmar of Denmark, wife of emperor Alexander III of Russia (1890); Madame Du Barry, mistress of King Louis XV of France; and Queen Victoria of England.
Sunglasses
Sunglasses or sun glasses are a form of protective eyewear that usually encloses or protects the eye pupil in order to prevent strong light, ultraviolet (UV) rays and, increasingly, blue light ("blue blocking") from penetrating. They can sometimes also function as a visual aid, since variously termed spectacles or glasses exist with lenses that are colored, polarized or darkened. In the early 20th century they were also known as sun cheaters (cheaters being an American slang term for glasses).[1]
Many people find direct sunlight too bright for comfort. During outdoor activities, the human eye can receive more light than usual. Healthcare professionals recommend eye protection whenever outside to protect the eyes from ultraviolet radiation and blue light, which can cause several serious eye problems. Sunglasses have long been associated with celebrities and film actors primarily from a desire to hide or mask their identity. Since the 1940s sunglasses have been popular as a fashion accessory, especially on the beach.
Photochromic lenses are lenses that darken on exposure to ultraviolet (UV) radiation. Once the UV is removed (for example, by walking indoors), the lenses gradually return to their clear state. Photochromic lenses may be made of glass, polycarbonate, or another plastic. The glass version of these lenses was first developed by Corning in the 1960s. More recently, plastic versions have been commercialized. The first of these was the Photolite lens, sold in the early 1980s by American Optical Corporation, but the first commercially successful plastic photochromic lens was introduced by Transitions Optical in 1991.[1]
The glass version of these lenses achieves its photochromic properties through the embedding of microcrystalline silver halides (usually silver chloride) in a glass substrate. Plastic photochromic lenses rely on organic photochromic molecules (for example, oxazines and naphthopyrans) to achieve the reversible darkening effect. The reason these lenses darken in sunlight but not indoors under artificial light is that room light does not contain the UV (short-wavelength light) found in sunlight. Automobile windows also block UV, so these lenses darken less in a car. Lenses that darken in response to visible (rather than UV) light would avoid these issues, but they are not feasible for most applications: in order to respond to light, it is necessary to absorb it, so the glass could not be made clear in its low-light state. This correctly implies that photochromic lenses are not entirely transparent; specifically, they filter out UV light. This is not a problem, because the human eye does not see in the UV spectrum.
Power line communication
Power line communication or power line carrier (PLC), also known as Power line Digital Subscriber Line (PDSL), mains communication, power line telecom (PLT), or power line networking (PLN), is a system for carrying data on a conductor also used for electric power transmission. Broadband over Power Lines (BPL) uses PLC by sending and receiving information-bearing signals over power lines to provide access to the Internet.
Electrical power is transmitted over high voltage transmission lines, distributed over medium voltage, and used inside buildings at lower voltages. Powerline communications can be applied at each stage. Most PLC technologies limit themselves to one set of wires (for example, premises wiring), but some can cross between two levels (for example, both the distribution network and premises wiring).
All power line communications systems operate by impressing a modulated carrier signal on the wiring system. Different types of powerline communications use different frequency bands, depending on the signal transmission characteristics of the power wiring used. Since the power wiring system was originally intended for transmission of AC power, in conventional use the power wire circuits have only a limited ability to carry higher frequencies. This propagation problem is a limiting factor for each type of power line communications. One exception is a technique called E-Line, which allows a single conductor on an overhead power line to operate as a waveguide, providing low-attenuation propagation of RF through microwave frequencies and information rates of multiple Gbps.
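A rough sketch of the idea, in Python with numpy (all values illustrative): a small high-frequency data carrier is added on top of the 50 Hz mains waveform, and the receiver separates the two by frequency.

import numpy as np

fs = 10_000                      # sample rate in Hz (illustrative)
t = np.arange(0, 0.1, 1 / fs)    # 100 ms of signal
mains = 325 * np.sin(2 * np.pi * 50 * t)       # ~230 V RMS power waveform
carrier = 0.5 * np.sin(2 * np.pi * 1000 * t)   # small impressed data carrier
line = mains + carrier                         # what the wiring actually carries

# The receiver separates the two by frequency: the spectrum has a huge
# 50 Hz component and a small but distinct peak at the 1 kHz carrier.
spectrum = np.abs(np.fft.rfft(line))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(freqs[np.argmax(spectrum)])          # 50.0  (the power waveform)
print(freqs[np.argsort(spectrum)[-2]])     # 1000.0 (the data carrier)

In a real coupler the separation is done with analog filters rather than an FFT, but the principle is the same.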
Data rates over a power line communication system vary widely. Low-frequency (about 100-200 kHz) carriers impressed on high-voltage transmission lines may carry one or two analog voice circuits, or telemetry and control circuits with an equivalent data rate of a few hundred bits per second; however, these circuits may be many miles long. Higher data rates generally imply shorter ranges; a local area network operating at millions of bits per second may only cover one floor of an office building, but eliminates installation of dedicated network cabling.
Broadband over power lines (BPL), also known as power-line Internet or powerband, is the use of PLC technology to provide broadband Internet access through ordinary power lines. A computer (or any other device) would need only to plug a BPL "modem" into any outlet in an equipped building to have high-speed Internet access. International Broadband Electric Communications (IBEC) and other companies currently offer BPL service to several electric cooperatives.
BPL may offer benefits over regular cable or DSL connections: the extensive infrastructure already available appears to allow people in remote locations to access the Internet with relatively little equipment investment by the utility. Also, such ubiquitous availability would make it much easier for other electronics, such as televisions or sound systems, to hook up.
However, variations in the physical characteristics of the electricity network and the current lack of IEEE standards mean that provisioning of the service is far from being a standard, repeatable process. The amount of bandwidth a BPL system can provide compared to cable and wireless is also in question. The prospect of BPL could motivate DSL and cable operators to serve rural communities more quickly.[5]
PLC modems transmit in the medium and high frequency bands (1.6 to 80 MHz electric carrier). The asymmetric speed at the modem is generally from 256 kbit/s to 2.7 Mbit/s. A repeater situated in the meter room supports speeds up to 45 Mbit/s and can be connected to up to 256 PLC modems. At the medium-voltage stations, the speed from the head ends to the Internet is up to 135 Mbit/s. To connect to the Internet, utilities can use an optical fiber backbone or a wireless link.
The system has a number of issues. The primary one is that power lines are inherently a very noisy environment. Every time a device turns on or off, it introduces a pop or click into the line. Energy-saving devices often introduce noisy harmonics into the line. The system must be designed to deal with these natural signaling disruptions and work around them.
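As a hypothetical illustration of designing around such noise, the sketch below on-off-keys a carrier, adds Gaussian noise plus occasional large "clicks", and still recovers the bits by correlating each symbol window against the known carrier; every parameter is invented for the example.

import math, random

FS, CARRIER_HZ, SYMBOL_N = 10_000, 500, 200   # illustrative values
random.seed(1)

ref = [math.sin(2 * math.pi * CARRIER_HZ * i / FS) for i in range(SYMBOL_N)]
bits = [1, 0, 1, 1, 0]

# On-off keyed carrier plus background noise and rare large "clicks"
# of the kind produced when devices switch on or off.
line = []
for b in bits:
    for s in ref:
        click = 5.0 if random.random() < 0.01 else 0.0
        line.append(b * s + random.gauss(0, 0.3) + click)

# Correlating each window against the known carrier averages the noise
# down, so a couple of impulses barely move the per-symbol score.
decoded = []
for k in range(len(bits)):
    window = line[k * SYMBOL_N:(k + 1) * SYMBOL_N]
    score = sum(w * r for w, r in zip(window, ref)) / SYMBOL_N
    decoded.append(1 if score > 0.25 else 0)

print(decoded)   # [1, 0, 1, 1, 0], matching the transmitted bits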
Broadband over power lines has developed faster in Europe than in the United States due to a historical difference in power system design philosophies. Power distribution uses step-down transformers to reduce the voltage for use by customers, but BPL signals cannot readily pass through transformers: their high inductance makes them act as low-pass filters, blocking high-frequency signals. Repeaters must therefore be attached to the transformers. In the U.S., it is common for a small transformer hung from a utility pole to serve a single house or a small number of houses; in Europe, it is more common for a somewhat larger transformer to serve 10 or 100 houses. This makes little difference for power delivery, but it means that delivering BPL over the power grid in a typical U.S. city requires an order of magnitude more repeaters than in a comparable European city. On the other hand, since bandwidth to the transformer is limited, the U.S. arrangement can increase the speed at which each household can connect, because fewer people share the same line. One possible solution is to use BPL as the backhaul for wireless communications, for instance by hanging Wi-Fi access points or cellphone base stations on utility poles, allowing end users within range to connect with equipment they already have.
safety
Industrial safety systems are crucial in hazardous plants such as oil and gas plants and nuclear plants. They are used to protect people, the plant, and the environment should the process move beyond its control margins. As the name suggests, these systems are not intended for controlling the process itself but for protection. Process control is performed by process control systems (PCS) and is interlocked by the safety systems so that immediate action is taken should the process control systems fail.
Process control and safety systems are usually merged under one system, called an integrated control and safety system (ICSS). Industrial safety systems typically use dedicated systems that are SIL 2 certified at minimum, whereas control systems can start with SIL 1. SIL applies to both hardware and software requirements, such as card and processor redundancy and voting functions.
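To make "voting functions" concrete, here is a minimal sketch (in Python, with invented names) of two-out-of-three (2oo3) voting, a common arrangement in which a trip requires agreement of at least two of three redundant sensors:

def vote_2oo3(a: bool, b: bool, c: bool) -> bool:
    """Trip only when at least two of three redundant sensors demand it,
    so a single failed sensor neither causes nor blocks a shutdown."""
    return (a and b) or (a and c) or (b and c)

# One stuck-low sensor does not block a genuine trip:
assert vote_2oo3(True, True, False) is True
# One spurious sensor does not cause a false trip:
assert vote_2oo3(True, False, False) is False

The design goal is exactly what the comments say: redundancy plus voting tolerates any single sensor failure in either direction.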
plc
A programmable logic controller (PLC) or programmable controller is a digital computer used for automation of electromechanical processes, such as control of machinery on factory assembly lines, amusement rides, or lighting fixtures. PLCs are used in many industries and machines. Unlike general-purpose computers, the PLC is designed for multiple input and output arrangements, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed or non-volatile memory. A PLC is an example of a real-time system, since output results must be produced in response to input conditions within a bounded time; otherwise unintended operation will result.
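The heart of that bounded response time is the PLC's scan cycle. The sketch below is a hypothetical Python illustration of the classic read-solve-write loop; real PLCs execute ladder logic or other IEC 61131-3 languages in firmware, and all names and the 10 ms period here are invented for the example.

import time

SCAN_PERIOD_S = 0.010   # illustrative 10 ms scan period

def read_inputs():
    # Stand-in for latching all field inputs at the start of a scan.
    return {"start_button": True, "overtemp": False}

def solve_logic(inputs, outputs):
    # One "rung": run the motor when started and not over temperature.
    outputs["motor"] = inputs["start_button"] and not inputs["overtemp"]

def write_outputs(outputs):
    # Stand-in for writing all outputs together at the end of a scan.
    pass

outputs = {"motor": False}
for _ in range(100):                      # a real PLC loops forever
    t0 = time.monotonic()
    solve_logic(read_inputs(), outputs)   # inputs stay frozen mid-scan
    write_outputs(outputs)
    # Sleeping out the remainder of the period bounds input-to-output latency.
    time.sleep(max(0.0, SCAN_PERIOD_S - (time.monotonic() - t0)))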
The PLC was invented in response to the needs of the American automotive manufacturing industry. Programmable controllers were initially adopted by the automotive industry, where software revision replaced the re-wiring of hard-wired control panels when production models changed.
Before the PLC, control, sequencing, and safety interlock logic for manufacturing automobiles was accomplished using hundreds or thousands of relays, cam timers, drum sequencers, and dedicated closed-loop controllers. The process of updating such facilities for the yearly model change-over was very time-consuming and expensive, as electricians needed to individually rewire each and every relay.
In 1968 GM Hydramatic (the automatic transmission division of General Motors) issued a request for proposal for an electronic replacement for hard-wired relay systems.
The winning proposal came from Bedford Associates of Bedford, Massachusetts. The first PLC, designated the 084 because it was Bedford Associates' eighty-fourth project, was the result. Bedford Associates started a new company dedicated to developing, manufacturing, selling, and servicing this new product: Modicon, which stood for MOdular DIgital CONtroller. One of the people who worked on that project was Dick Morley, who is considered to be the "father" of the PLC. The Modicon brand was sold in 1977 to Gould Electronics, later acquired by the German company AEG, and then by the French company Schneider Electric, the current owner.
One of the very first 084 models built is now on display at Modicon's headquarters in North Andover, Massachusetts. It was presented to Modicon by GM, when the unit was retired after nearly twenty years of uninterrupted service. Modicon used the 84 moniker at the end of its product range until the 984 made its appearance.
The automotive industry is still one of the largest users of PLCs.
control valves
Control valves are valves used to control conditions such as flow, pressure, temperature, and liquid level by fully or partially opening or closing in response to signals received from controllers that compare a "setpoint" to a "process variable" whose value is provided by sensors that monitor changes in such conditions.[1]
The opening or closing of control valves is done by means of electrical, hydraulic, or pneumatic systems. Positioners are used to control the opening or closing of the actuator based on electric or pneumatic signals. These control signals were traditionally based on 3-15 psi (0.2-1.0 bar) pneumatic pressure; 4-20 mA signals are now more common in industry, 0-10 V is used for HVAC systems, and "smart" digital systems are being introduced, with HART, Foundation Fieldbus, and Profibus being the more common protocols.
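As a worked illustration of the 4-20 mA convention, the snippet below (a sketch with invented names, not any vendor's implementation) maps a 0-100% controller output onto a loop current. The "live zero" at 4 mA is what lets a reading near 0 mA be recognized as a broken loop rather than a closed valve:

def percent_to_ma(output_pct: float) -> float:
    """Map a 0-100 % controller output onto a 4-20 mA loop current."""
    return 4.0 + 16.0 * (output_pct / 100.0)

def ma_to_percent(current_ma: float) -> float:
    """Invert the mapping; a current well below 4 mA suggests a fault."""
    if current_ma < 3.6:  # fault threshold chosen for illustration
        raise ValueError("loop current below live zero: wiring fault?")
    return (current_ma - 4.0) / 16.0 * 100.0

print(percent_to_ma(50.0))    # 12.0 mA at half travel
print(ma_to_percent(12.0))    # 50.0 %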
A valve is a device that regulates the flow of a fluid (gases, fluidized solids, slurries, or liquids) by opening, closing, or partially obstructing various passageways. Valves are technically pipe fittings, but are usually discussed as a separate category.
Valves are also found in the human body. For example, there are several which control the flow of blood in the chambers of the heart and maintain the correct pumping action (see heart valve article).
Valves are used in a variety of contexts, including industrial, military, commercial, residential, and transportation.
Oil and gas, power generation, mining, water reticulation, sewerage and chemical manufacturing are the industries in which the majority of valves are used.
Plumbing valves, such as taps for hot and cold water, are the most noticeable types of valves. Other valves encountered on a daily basis include gas control valves on cookers and barbecues, small valves fitted to washing machines and dishwashers, and safety devices fitted to hot water systems.
Valves may be operated manually, either by a hand wheel, lever, or pedal. Valves may also be automatic, driven by changes in pressure, temperature, or flow. These changes may act upon a diaphragm or a piston, which in turn activates the valve; common examples of this type are safety valves fitted to hot water systems or steam boilers.
More complex control systems, in which a valve must be controlled automatically from an external input (for example, regulating flow through a pipe to a changing set point), require an actuator. An actuator strokes the valve depending on its input and set-up, allowing the valve to be positioned accurately and providing control over a variety of requirements.
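To make this concrete, here is a minimal sketch of a proportional-integral (PI) loop stroking a valve toward a changing set point. The gains, time step, and toy first-order process model are all illustrative assumptions, not a description of any particular controller:

# Minimal PI loop positioning a valve (all values illustrative).
kp, ki, dt = 2.0, 0.5, 0.1
setpoint, pv, integral = 50.0, 0.0, 0.0

for step in range(400):
    if step == 150:
        setpoint = 30.0                  # the set point changes mid-run
    error = setpoint - pv
    integral += error * dt
    # Valve travel is physically limited to 0-100 %.
    valve_pct = max(0.0, min(100.0, kp * error + ki * integral))
    pv += (valve_pct - pv) * 0.2         # toy first-order process response

print(round(pv))   # ~30, tracking the final set point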
Valves are also found in Otto cycle (internal combustion) engines, where, driven by a camshaft, lifters, and/or push rods, they play a major role in engine cycle control.
textiles
A textile is a flexible material consisting of a network of natural or artificial fibres often referred to as thread or yarn. Yarn is produced by spinning raw wool fibres, linen, cotton, or other material on a spinning wheel to produce long strands.[1] Textiles are formed by weaving, knitting, crocheting, knotting, or pressing fibres together (felt).
The words fabric and cloth are used in textile assembly trades (such as tailoring and dressmaking) as synonyms for textile. However, there are subtle differences in these terms in specialized usage. Textile refers to any material made of interlacing fibres. Fabric refers to any material made through weaving, knitting, crocheting, or bonding. Cloth refers to a finished piece of fabric that can be used for a purpose such as covering a bed.
Textiles have an assortment of uses, the most common of which are for clothing and containers such as bags and baskets. In the household, they are used in carpeting, upholstered furnishings, window shades, towels, coverings for tables, beds, and other flat surfaces, and in art. In the workplace, they are used in industrial and scientific processes such as filtering. Miscellaneous uses include flags, backpacks, tents, nets, cleaning devices such as handkerchiefs and rags; transportation devices such as balloons, kites, sails, and parachutes; and strengthening in composite materials such as fibreglass and industrial geotextiles. Children can learn to use textiles to make collages, sew, quilt, and make toys.
Textiles used for industrial purposes, and chosen for characteristics other than their appearance, are commonly referred to as technical textiles. Technical textiles include textile structures for automotive applications, medical textiles (e.g. implants), geotextiles (reinforcement of embankments), agrotextiles (textiles for crop protection), and protective clothing (e.g. against heat and radiation for firefighter clothing, against molten metals for welders, stab protection, and bulletproof vests). In all these applications stringent performance requirements must be met. Woven from threads coated with zinc oxide nanowires, a laboratory fabric has been shown capable of "self-powering nanosystems" using vibrations created by everyday actions like wind or body movements.
Plant textiles
Grass, rush, hemp, and sisal are all used in making rope. In the first two, the entire plant is used for this purpose, while in the last two, only fibres from the plant are utilized. Coir (coconut fibre) is used in making twine, and also in floormats, doormats, brushes, mattresses, floor tiles, and sacking.
Straw and bamboo are both used to make hats. Straw, a dried form of grass, is also used for stuffing, as is kapok.
Fibres from pulpwood trees, cotton, rice, hemp, and nettle are used in making paper.
Cotton, flax, jute, hemp, modal and even bamboo fibre are all used in clothing. Piña (pineapple fibre) and ramie are also fibres used in clothing, generally with a blend of other fibres such as cotton.
Acetate is used to increase the shininess of certain fabrics such as silks, velvets, and taffetas.
Seaweed is used in the production of textiles: a water-soluble fibre known as alginate is produced and used as a holding fibre; when the cloth is finished, the alginate is dissolved, leaving an open area.
Lyocell is a man-made fabric derived from wood pulp. It is often described as a man-made silk equivalent; it is a tough fabric that is often blended with other fabrics, such as cotton.
nano processors
Early CPUs were custom-designed as a part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are made for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones and children's toys.
While the complexity, size, construction, and general form of CPUs have changed drastically over the past sixty years, it is notable that the basic design and function have not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines. As Moore's law continues to hold true, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing such as the quantum computer, as well as to expand the usage of parallelism and other methods that extend the usefulness of the classical von Neumann model.
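As a toy illustration of what "stored-program machine" means, the sketch below implements a hypothetical three-instruction von Neumann computer in Python; instructions and data share the same memory, and the instruction set is invented for the example.

# Toy von Neumann machine: program and data live in one memory.
memory = [
    ("LOAD", 7),     # acc <- memory[7]
    ("ADD", 8),      # acc <- acc + memory[8]
    ("STORE", 9),    # memory[9] <- acc
    ("HALT", 0),
    0, 0, 0,         # unused padding
    2, 3, 0,         # data at addresses 7, 8, 9
]

acc, pc = 0, 0
while True:
    op, addr = memory[pc]                 # fetch from the shared memory
    pc += 1
    if op == "LOAD":                      # execute
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[9])   # 5: the result was written back into the same memory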
Future Nanotechnology Scope
Visions of self-replicating nanomachines that could devour the Earth in a "grey goo" are probably wide of the mark, but "radical nanotechnology" could still deliver great benefits to society. The question is how best to achieve this goal.
Today nanotechnology is still in a formative phase--not unlike the condition of computer science in the 1960s or biotechnology in the 1980s. Yet it is maturing rapidly. Between 1997 and 2005, investment in nanotech research and development by governments around the world soared from $432 million to about $4.1 billion, and corresponding industry investment exceeded that of governments by 2005. By 2015, products incorporating nanotech will contribute approximately $1 trillion to the global economy. About two million workers will be employed in nanotech industries, and three times that many will have supporting jobs.
Over time, therefore, nanotechnology should benefit every industrial sector and health care field. It should also help the environment through more efficient use of resources and better methods of pollution control. Nanotech does, however, pose new challenges to risk governance as well. Internationally, more needs to be done to collect the scientific information needed to resolve the ambiguities and to install the proper regulatory oversight. Helping the public to perceive nanotech soberly in a big picture that retains human values and quality of life will also be essential for this powerful new discipline to live up to its astonishing potential.
computerss nano
Dell is a company that makes laptop and desktop computers and computer accessories. It is named after Michael Dell, the CEO and founder of the company. Dell makes computers for businesses and home users, and it also makes computer monitors and printers. It used to make portable music players, called the Dell DJ, and PDAs.
The company is headquartered in Round Rock, Texas. In 2006, it employed over 78,000 people. Some of its computers come with Linux; others come with Microsoft Windows.
HP Pavilion is a line of personal computers produced by Hewlett-Packard and introduced in 1995. The name is applied to both desktops and laptops for the Home and Home Office product range.
When HP merged with Compaq in 2002, it took over Compaq's existing naming rights agreement. As a result, HP sells both HP and Compaq-branded machines. Computers can be ordered either directly from the factory or over the phone, and can be customized through choosing desired specifications. This is known as a CTO (formerly BTO) option.
Acer Incorporated (LSE: ACID, TSE: 2353) (traditional Chinese: 宏碁股份有限公司) is a Taiwan-based multinational electronics manufacturer. Its product lineup includes desktops and laptops, as well as personal digital assistants (PDAs), servers and storage, displays, peripherals, and e-business services for business, government, education, and home users.
Acer is the largest manufacturer of laptop computers[2] and the second largest computer manufacturer in the world behind HP.[3] The company also owns the largest franchised computer retail chain in Taipei, Taiwan.[4]
Toshiba Corporation (Japanese: 株式会社東芝 Kabushiki-gaisha Tōshiba?) (TYO: 6502) (pronounced: Toe-SHE-buh) is a Japanese multinational conglomerate manufacturing company, headquartered in Tokyo, Japan. The company's main business is in infrastructure, consumer products, electronic devices and components.
Toshiba-made semiconductors are among the worldwide top 20 semiconductor sales leaders. Toshiba is the world's fifth largest personal computer manufacturer, after Hewlett-Packard and Dell of the U.S., Acer of Taiwan, and Lenovo of China.[2]
Toshiba, a world leader in high technology, is a diversified manufacturer and marketer of advanced electronic and electrical products, spanning information & communications equipment and systems, Internet-based solutions and services, electronic components and materials, power systems, industrial and social infrastructure systems, and household appliances.
Energy applications of nanotechnology
An important subfield of nanotechnology related to energy is nanofabrication. Nanofabrication is the process of designing and creating devices on the nanoscale. Creating devices smaller than 100 nanometers opens many doors for the development of new ways to capture, store, and transfer energy. The inherent level of control that nanofabrication could give scientists and engineers would be critical in providing the capability of solving many of the problems that the world is facing today related to the current generation of energy technologies.
People in the fields of science and engineering have already begun developing ways of utilizing nanotechnology for the development of consumer products. Benefits already observed from the design of these products are an increased efficiency of lighting and heating, increased electrical storage capacity, and a decrease in the amount of pollution from the use of energy. Benefits such as these make the investment of capital in the research and development of nanotechnology a top priority.
Fuel cells currently designed for transportation need rapid start-up periods to be practical for consumer use. Rapid cycling puts a lot of strain on traditional polymer electrolyte membranes, which shortens membrane life and requires frequent replacement. Using nanotechnology, engineers can create a much more durable polymer membrane, which addresses this problem. Nanoscale polymer membranes are also much more efficient in ionic conductivity. This improves the efficiency of the system and lengthens the time between replacements, which lowers costs.
Research into longer-lasting batteries has been ongoing for years, and researchers have now begun to apply nanotechnology to battery design. mPhase Technologies, together with Rutgers University and Bell Laboratories, has used nanomaterials to alter the wetting behavior of the surface on which the battery's liquid sits, spreading the droplets over a greater area and giving greater control over their movement. This gives more control to the designer of the battery: reactions can be prevented by separating the electrolytic liquid from the anode and the cathode when the battery is not in use, and joining them when the battery is needed.
Thermal applications are another future use of nanotechnology: low-cost heating, ventilation, and air-conditioning systems whose molecular structure is tailored for better temperature management.
Biotechnology is technology based on biology, agriculture, food science, and medicine. Modern use of the term usually refers to genetic engineering as well as cell and tissue culture technologies. However, the concept encompasses a wider range and history of procedures for modifying living things according to human purposes, going back to the domestication of animals, cultivation of plants, and "improvements" to these through breeding programs that employ artificial selection and hybridization. By comparison to biotechnology, bioengineering is generally thought of as a related field with its emphasis more on mechanical and higher-systems approaches to interfacing with and exploiting living things. The United Nations Convention on Biological Diversity defines biotechnology as:[1]
"Any technological application that uses biological systems, dead organisms, or derivatives thereof, to make or modify products or processes for specific use."
Biotechnology draws on the pure biological sciences (genetics, microbiology, animal cell culture, molecular biology, biochemistry, embryology, cell biology) and in many instances is also dependent on knowledge and methods from outside the sphere of biology (chemical engineering, bioprocess engineering, information technology, biorobotics). Conversely, modern biological sciences (including even concepts such as molecular ecology) are intimately entwined and dependent on the methods developed through biotechnology and what is commonly thought of as the life sciences industry.
A series of derived terms have been coined to identify several branches of biotechnology, for example:-bioinformatics
Bioinformatics is an interdisciplinary field which addresses biological problems using computational techniques, and makes the rapid organization and analysis of biological data possible. The field may also be referred to as computational biology, and can be defined as, "conceptualizing biology in terms of molecules and then applying informatics techniques to understand and organize the information associated with these molecules, on a large scale."[6] Bioinformatics plays a key role in various areas, such as functional genomics, structural genomics, and proteomics, and forms a key component in the biotechnology and pharmaceutical sector.
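As a small, hypothetical taste of "applying informatics techniques ... on a large scale", the snippet below computes GC content, one of the simplest sequence statistics in bioinformatics; the sequences and names are invented:

def gc_content(seq: str) -> float:
    """Fraction of bases in a DNA sequence that are G or C."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# Toy sequences stand in for genome-scale data files.
for name, seq in {"geneA": "ATGCGC", "geneB": "ATATAT"}.items():
    print(name, round(gc_content(seq), 2))   # geneA 0.67, geneB 0.0

Real analyses run calculations like this across millions of sequences, which is exactly the "rapid organization and analysis" the field is about.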
Blue biotechnology is a term that has been used to describe the marine and aquatic applications of biotechnology, but its use is relatively rare.
Green biotechnology is biotechnology applied to agricultural processes. An example would be the selection and domestication of plants via micropropagation. Another example is the designing of transgenic plants to grow under specific environmental conditions, or in the presence (or absence) of certain chemicals. One hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. An example of this is the engineering of a plant to express a pesticide, thereby ending the need for external application of pesticides, as with Bt corn. Whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate.
Red biotechnology is applied to medical processes. Some examples are the designing of organisms to produce antibiotics, and the engineering of genetic cures through genomic manipulation.
White biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. An example is the designing of an organism to produce a useful chemical. Another example is the use of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous/polluting chemicals. White biotechnology tends to consume fewer resources than the traditional processes used to produce industrial goods. The investments and economic output of all of these types of applied biotechnologies form what has been described as the bioeconomy.
"Any technological application that uses biological systems, dead organisms, or derivatives thereof, to make or modify products or processes for specific use."
Biotechnology draws on the pure biological sciences (genetics, microbiology, animal cell culture, molecular biology, biochemistry, embryology, cell biology) and in many instances is also dependent on knowledge and methods from outside the sphere of biology (chemical engineering, bioprocess engineering, information technology, biorobotics). Conversely, modern biological sciences (including even concepts such as molecular ecology) are intimately entwined and dependent on the methods developed through biotechnology and what is commonly thought of as the life sciences industry.
A series of derived terms have been coined to identify several branches of biotechnology, for example:-bioinformatics
Bioinformatics is an interdisciplinary field which addresses biological problems using computational techniques, and makes the rapid organization and analysis of biological data possible. The field may also be referred to as computational biology, and can be defined as, "conceptualizing biology in terms of molecules and then applying informatics techniques to understand and organize the information associated with these molecules, on a large scale."[6] Bioinformatics plays a key role in various areas, such as functional genomics, structural genomics, and proteomics, and forms a key component in the biotechnology and pharmaceutical sector.
Blue biotechnology is a term that has been used to describe the marine and aquatic applications of biotechnology, but its use is relatively rare.
Green biotechnology is biotechnology applied to agricultural processes. An example would be the selection and domestication of plants via micropropagation. Another example is the designing of transgenic plants to grow under specific environmental in the presence (or absence) of chemicals. One hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. An example of this is the engineering of a plant to express a pesticide, thereby ending the need of external application of pesticides. An example of this would be Bt corn. Whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate.
Red biotechnology is applied to medical processes. Some examples are the designing of organisms to produce antibiotics, and the engineering of genetic cures through genomic manipulation.
White biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. An example is the designing of an organism to produce a useful chemical. Another example is the using of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous/polluting chemicals. White biotechnology tends to consume less in resources than traditional processes used to produce industrial goods. The investments and economic output of all of these types of applied biotechnologies form what has been described as the bioeconomy.
Nanotechnology in artificial intelligence
Educating the public about nanotechnology and other complex but emerging technologies causes people to become more “worried and cautious” about the new technologies’ prospective benefits, according to a recent study by researchers at North Carolina State University.
Many people believe that informed citizen input should influence public policies about modern science and technology, but several prominent academics warn against relying on citizen deliberations to promote public engagement in policy-making. These scholars contend that citizens do not enjoy the process of deliberating, and that individual and collective opinions developed during group deliberation are often worse than if deliberation had never taken place. Following the Danish practice known as "Consensus Conferences," we tested this skeptical perspective about citizen capacities by holding Citizen Technology Forums (CTF) in six cities in the United States throughout March 2008. Volunteer participants became informed about human enhancement technologies and generated written reports about their concerns and recommendations regarding the development trajectory of these technologies. We find that participants dramatically increased their factual understanding of human enhancement technologies and reported feeling more internally efficacious and trusting of others after deliberating; however, they also became more wary of the potential risks and benefits of these technologies and more concerned about potential inequities in the distribution of those benefits.
Nanotechnology is a relatively new field of research and scientific development. It has been speculated about for decades and the wonders and advantages of nanotechnology have been extolled by many. But not all.
The scientific community, in its never-ending quest for information and knowledge, consistently fails to seriously acknowledge the dangers of "invisible" technology, such as nanotechnology, going haywire. Nothing is ever supposed to go wrong, according to them, yet somehow it always does.
In this respect, nanotechnology is not different from other new disciplines. We, as humans, don't seem to have the capacity to really learn to understand something before we start to mess with it on a big scale.
And when things do go wrong - just imagine an autodidactic nano-intelligence on the loose - we end up fighting the symptoms, pointing fingers at each other, and denying any and all culpability.
Forethought of possible consequences is usually far from our minds as we are caught up, or pushed by superiors, to make the research investment profitable as soon as possible.
Nanotech Robotics Items
Robotics, through the joint use of nanoelectronics, photolithography, and new biomaterials, can be considered a possible way to enable the manufacturing technology required for nanorobots for common medical applications, such as surgical instrumentation, diagnosis, and drug delivery. Indeed, this feasible approach to manufacturing with nanotechnology is a practice currently in use in the electronics industry. Practical nanorobots should therefore be integrated as nanoelectronic devices, which will allow tele-operation and advanced capabilities for medical instrumentation.
Nanotechnology promises futuristic applications such as microscopic robots that assemble other machines or travel inside the body to deliver drugs or perform microsurgery. These machines will face some unique physics: at small scales, fluids appear as viscous as molasses, and Brownian motion makes everything shake incessantly. Taking inspiration from the biological motors of living cells, chemists are learning how to power microsize and nanosize machines with catalytic reactions.
Nanorobotics is an emerging field that deals with the controlled manipulation of objects with nanometer-scale dimensions. Typically, an atom has a diameter of a few Ångstroms (1 Å = 0.1 nm = 10⁻¹⁰ m), a molecule's size is a few nm, and clusters or nanoparticles formed by hundreds or thousands of atoms have sizes of tens of nm. Nanorobotics is therefore concerned with interactions with atomic- and molecular-sized objects, and is sometimes called Molecular Robotics. We use these two expressions, plus Nanomanipulation, as synonyms in this article.
Molecular Robotics falls within the purview of Nanotechnology, which is the study of phenomena and structures with characteristic dimensions in the nanometer range. The birth of Nanotechnology is usually associated with a talk by Nobel-prize winner Richard Feynman entitled "There's Plenty of Room at the Bottom", whose text may be found in [Crandall & Lewis 1992]. Nanotechnology has the potential for major scientific and practical breakthroughs. Future applications, ranging from very fast computers to self-replicating robots, are described in Drexler's seminal book [Drexler 1986].
Nanotechnology is being pursued along two converging directions. From the top down, semiconductor fabrication techniques are producing smaller and smaller structures; see, e.g., [Colton & Marrian 1995] for recent work. For example, the line width of the original Pentium chip is 350 nm. Current optical lithography techniques have obvious resolution limitations because of the wavelength of visible light, which is on the order of 500 nm. X-ray and electron-beam lithography will push sizes further down, but with a great increase in the complexity and cost of fabrication. These top-down techniques do not seem promising for building nanomachines that require precise positioning of atoms or molecules.
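The resolution limit mentioned above is commonly estimated with the Rayleigh criterion, CD ≈ k1·λ/NA, where λ is the wavelength and k1 and NA are process- and lens-dependent factors. A small sketch with assumed typical values (the k1 and NA below are illustrative, not tied to any specific tool):

# Rayleigh criterion for the minimum printable feature size (critical dimension).
def critical_dimension(wavelength_nm, k1=0.4, na=0.6):
    # k1 ~ 0.4 and NA ~ 0.6 are assumed, typical values for the era discussed.
    return k1 * wavelength_nm / na

print(critical_dimension(500.0))  # visible light: ~333 nm, near the Pentium-era line width
print(critical_dimension(1.0))    # ~1 nm X-rays: sub-nm in principle, at much greater cost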
Alternatively, one can proceed from the bottom up, by assembling atoms and molecules into functional components and systems. There are two main approaches for building useful devices from nanoscale components. The first is based on self-assembly, and is a natural evolution of traditional chemistry and bulk processing; see, e.g., [Gómez-López et al. 1996]. The other is based on controlled positioning of nanoscale objects, direct application of forces, electric fields, and so on. The self-assembly approach is being pursued at many laboratories. Despite all the current activity, self-assembly has severe limitations because the structures produced tend to be highly symmetric, and the most versatile self-assembled systems are organic and therefore generally lack robustness. The second approach involves Nanomanipulation, and is being studied by a small number of researchers, who are focusing on techniques based on Scanning Probe Microscopy (abbreviated SPM, and described later in this article).
A top-down technique that is closely related to Nanomanipulation involves removing or depositing small amounts of material by using an SPM. This approach falls within what is usually called Nanolithography. SPM-based Nanolithography is akin to machining or to rapid prototyping techniques such as stereolithography. For example, one can remove a row or two of hydrogen atoms on a silicon substrate that has been passivated with hydrogen by moving the tip of an SPM in a straight line over the substrate and applying a suitable voltage. The removed atoms are "lost" to the environment, much like metal chips in a machining operation. Lines with widths on the order of 10 to 100 nm have been written by these techniques; see, e.g., [Wiesendanger 1994] for a survey of some of this work. In this article we focus on Nanomanipulation proper, which is akin to assembly in the macroworld.
Nanorobotics research has proceeded along two lines. The first is devoted to the design and computational simulation of robots with nanoscale dimensions; see [Drexler 1992] for the design of robots that resemble their macroscopic counterparts. Drexler's nanorobot uses various mechanical components, such as nanogears, built primarily with carbon atoms in a diamondoid structure. A major issue is how to build these devices, and little experimental progress has been made towards their construction.
The second area of Nanorobotics research involves manipulation of nanoscale objects with macroscopic instruments. Experimental work has been focused on this area, especially through the use of SPMs as robots. The remainder of this article describes SPM principles, surveys SPM use in Nanomanipulation, looks at the SPM as a robot, and concludes with a discussion of some of the challenges that face Nanorobotics research.
Scanning Probe Microscopes
The Scanning Tunnelling Microscope (STM) was invented by Binnig and Rohrer at the IBM Zürich laboratory in the early 1980s, and won them a Nobel Prize four years later.
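What makes the STM so sensitive is that the tunnelling current decays exponentially with the tip-sample gap, roughly I ∝ exp(-2κd), where the decay constant κ is about 1 per ångstrom for typical metal work functions. A hedged sketch of that dependence (the κ used is an assumed typical value):

import math

def current_ratio(gap_change_angstrom, kappa=1.0):
    # Relative change in tunnelling current when the gap grows by the given amount.
    return math.exp(-2.0 * kappa * gap_change_angstrom)

print(current_ratio(1.0))   # ~0.14: retracting the tip by 1 angstrom cuts the current ~7x

That order-of-magnitude change per ångstrom is what lets the feedback loop track surfaces with atomic resolution.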
Nanotechnology in aerospace
Aerospace: “The importance of the space sector can be emphasized by the number of spacecraft launched. In the period from 1957 to 2005, 6376 spacecraft were launched, an average of 133 per year. There has been a decrease in the number of spacecraft launched in recent years, with 78 launched in 2005. Of the 6378 launches, 56.8% were military spacecraft and 43.2% were civilian. 245 manned missions were launched in this period, and 1674 communication or weather satellites were also launched. The remaining spacecraft launches were exploration missions.”
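As a quick sanity check, the quoted per-year average follows directly from the totals (the 6376 vs. 6378 discrepancy is in the original source):

launches = 6376
print(launches / (2005 - 1957))   # ~132.8 over a 48-year span, matching the quoted ~133
print(round(0.568 * launches))    # ~3622 military spacecraft
print(round(0.432 * launches))    # ~2754 civilian spacecraft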
Nanoscale technologies will increasingly have an impact on numerous commercial, military, and aerospace applications, including satellites. I covered nano military applications in a column last year and will cover that subject again sometime later this year. I would like to review non-military aerospace applications here.
CANEUS is described as the world's foremost international conference on Micro-Nano-Technology (MNT) development for aerospace applications. According to its webpage, the conference deals with the challenges of rapidly and efficiently transitioning aerospace MNT development from a low technology-readiness-level (TRL) to system-level implementations based on an integrated "cradle-to-grave" approach.
The webpage describes CANEUS stakeholders as:
the low-TRL research and development community;
the mid- and high-TRL system developer community;
end-users from the aerospace and defense sectors;
the private investment community, consisting of venture capitalists and investors;
government investors in CANEUS member countries;
government policy makers for cross-border collaborations; and
scientists, engineers, program managers, investors and policy-makers from the U.S., Canada, Europe, and Asia, representing these MNT stakeholder communities.
The conference covers such topics as:
emerging MNT concepts (low TRL);
MNT system development (mid TRL);
mature systems and sub-systems (high TRL);
end-user needs and perspectives;
investment perspectives and roadmaps; and
governmental policies affecting coordinated, joint international development of aerospace MNT.
Nanowerk reported recently that CANEUS has launched a "pre-seed" fund to provide partial funding for system-level development projects recommended by the CANEUS Board. Contributors gain privileged access to downstream investment opportunities. The fund description is posted on the CANEUS website. A NATO lecture series has been developed on nanotechnology aerospace applications. Interestingly, a paper published in 1999 covered the application of molecular nanotechnology in aerospace.
The Open-Site free internet encyclopedia has a write-up about the purpose, needs, problems and solutions of nanotechnology research for aerospace.
The most complete publicly available report on nanotechnology applications in non-military aerospace was published recently by the Nanoforum. It says this Nanotechnology in Aerospace report “presents a concise introduction and contribution to the expert debate on trends in nanomaterials and nanotechnologies for applications in the civil aeronautics and space sectors in Europe and explicitly excludes any military R&D and applications.”
computer science to nanotechnology
Smaller, lighter computers and an end to worries about electrical failures sending hours of on-screen work into an inaccessible limbo mark the potential result of Argonne research on tiny ferroelectric crystals.
"Tiny" means billionths of a meter, or about 1/500th the width of a human hair. These nanomaterials behave differently than their larger bulk counterparts. Argonne researchers have learned that they are more chemically reactive, exhibit new electronic properties and can be used to create materials that are stronger, tougher and more resistant to friction and wear than bulk materials.
Improved nano-engineered ferroelectric crystals could realize a 50-year-old dream of creating nonvolatile random access memory (NVRAM). The first fruits of it can be seen in Sony's PlayStation 2 and in smart cards now in use in Brazil, China and Japan. A simple wave of a smart card identifies personnel or pays for gas or public transportation.
Computing applications
RAM – random access memory – is used when someone enters information or gives a command to the computer. It can be written to as well as read but - with standard commercial technology - holds its content only while powered by electricity.
Argonne materials scientists have created and are studying nanoscale crystals of ferroelectric materials that can be altered by an electrical field and retain any changes.
Ferroelectric materials – so called because they behave similarly to ferromagnetic materials even though they don't generally contain iron – consist of crystals whose low symmetry causes spontaneous electrical polarization along one or more of their axes. The application of voltage can change this polarity. Ferroelectric crystals can also convert mechanical energy to electrical energy – the piezoelectric effect – or electrical energy to optical effects.
A strong external electrical field can reverse the plus and minus poles of ferroelectric polarization. The crystals hold their orientation until forced to change by another applied electric field. Thus, they can be coded as binary memory, representing "zero" in one orientation and "one" in the other.
Because the crystals do not revert spontaneously, RAM made with them would not be erased should there be a power failure. Laptop computers would no longer need back-up batteries, permitting them to be made still smaller and lighter. There would be a similar impact on cell phones.
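A toy model makes the nonvolatility point concrete: each cell stores a bit as a polarization orientation that only an applied field can flip, so a simulated power loss leaves the data untouched. This is an illustrative sketch, not a model of any real device:

# Toy ferroelectric memory cell: polarization orientation encodes one bit.
class FerroelectricCell:
    def __init__(self):
        self.polarization = -1          # -1 encodes "0", +1 encodes "1"

    def write(self, bit):
        # An applied field sets the polarization; it then stays put on its own.
        self.polarization = 1 if bit else -1

    def read(self):
        return 1 if self.polarization > 0 else 0

cell = FerroelectricCell()
cell.write(1)
# ... power removed and restored: nothing in the cell changes ...
assert cell.read() == 1             # the bit survives without standby power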
Achieving such permanence is a long-standing dream of the computer industry.
"Companies such as AT&T, Ford, IBM, RCA and Westinghouse Electric made serious efforts to develop non-volatile RAMs in the 1950s, but couldn't achieve commercial use," said Argonne researcher Orlando Auciello. "Back then, NVRAMs were based on expensive ferroelectric single crystals, which required substantial voltage to switch their polarity. This, and cross talk inherent in the then recently devised row matrix address concept, made them impractical.
"Working on the nanoscale changes this," said Auciello. "It means higher density memories with faster speeds and megabyte (the amount of memory needed to store one million characters of information) - or even gigabyte (one billion bytes) - capacity. It's not clear how soon such capacity will be available, but competition is heavy, stakes are high, and some companies claim they will have the first fruits of this research within two years."
Nanotechnology in Medicine
Applications of nanotechnology in medicine currently being developed involve employing nano-particles to deliver drugs, heat, light or other substances to specific cells in the human body. Engineering particles to be used in this way allows detection and/or treatment of diseases or injuries within the targeted cells, thereby minimizing the damage to healthy cells in the body.
The longer range future of nanotechnology in medicine is referred to as nanomedicine. This involves the use of manufactured nano-robots to make repairs at the cellular level.
Nanotechnology in Medicine: Company Directory (company and product)
1. CytImmune: Gold nanoparticles for targeted delivery of drugs to tumors
2. Nucryst: Antimicrobial wound dressings using silver nanocrystals
3. Nanobiotix: Nanoparticles that target tumor cells; when irradiated by X-rays, the nanoparticles generate electrons which cause localized destruction of the tumor cells
4. Oxonica: Disease identification using gold nanoparticles (biomarkers)
5. Nanotherapeutics: Nanoparticles for improving the performance of drug delivery by oral, inhaled or nasal methods
6. NanoBio: Nanoemulsions for nasal delivery to fight viruses (such as the flu and colds) and bacteria
7. BioDelivery Sciences: Oral delivery of drugs encapsulated in a nanocrystalline structure called a cochleate
8. NanoBioMagnetics: Magnetically responsive nanoparticles for targeted drug delivery and other applications
9. Z-Medica: Medical gauze containing aluminosilicate nanoparticles which helps blood clot faster in open wounds
Nanotechnology is used to create ultra-small polymer particles capable of carrying the drugs into the body. The scientists say the development of the combination drug makes it possible to create a precise feedback system that can safely regulate release of the drugs aboard the nanoparticles.
Using human plasma in laboratory tests, one ‘pro-drug’ was successfully identified as being able to sense blood oxygen levels and to turn on or off as needed.
“When respiratory distress is too severe, that will trigger release of Naloxone, the antagonist (morphine-suppressing) drug. When the oxygen blood levels go up, that will stop the action of the antagonist drug and more morphine will be available,” says Baohua Huang, Ph.D., the study’s first author and a research investigator at the Michigan Nanotechnology Institute and in Internal Medicine.
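The quoted mechanism amounts to a threshold feedback controller: low blood oxygen triggers the antagonist, and recovery lets morphine act again. The sketch below is purely illustrative; the threshold value and names are hypothetical, not taken from the study:

# Hypothetical threshold controller mirroring the described oxygen feedback.
LOW_O2_THRESHOLD = 90.0   # assumed trigger level, percent saturation (illustrative)

def dose_response(oxygen_saturation):
    if oxygen_saturation < LOW_O2_THRESHOLD:
        return "release naloxone (suppress morphine)"   # respiratory distress
    return "withhold naloxone (morphine available)"     # oxygen has recovered

print(dose_response(85.0))
print(dose_response(97.0))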
MNIMBS scientists are conducting further extensive studies before testing on humans proceeds.
silver in nanotechnology
Silver has been used for centuries to prevent and treat a variety of diseases, most notably infections. It has been well documented that silver coins were used in ancient Greece and Rome as a disinfectant for the storage of water and other liquids. (1,2) More recently, NASA still uses silver to maintain water purity on the space shuttle. Silver has extremely potent antimicrobial properties, as only one part per 100 million of elemental silver is an effective antimicrobial in a solution. Free silver ions, or radicals, are known to be the active antimicrobial agent. In order to achieve a bactericidal effect, silver ions must be available in solution at the bacterial surface. Efficacy depends on the aqueous concentration of these ions. Silver ions appear to kill micro-organisms instantly by blocking the respiratory enzyme system (energy production), as well as altering microbe DNA and the cell wall, while having no toxic effect on human cells in vivo.
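To put "one part per 100 million" in perspective, a quick conversion (assuming a mass fraction in water at roughly 1 kg per liter) shows just how little silver that is:

# 1 part per 100 million by mass, in water (assumed density ~1000 g/L).
fraction = 1 / 100_000_000
grams_per_liter = fraction * 1000.0
print(grams_per_liter)                        # 1e-5 g/L
print(grams_per_liter * 1e6, "micrograms/L")  # 10 ug/L, i.e. about 10 parts per billion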
More recent information has provided at least a hypothesis as to the mechanism of silver's pro-healing and anti-inflammatory effects. Initial literature reports on the use of pure silver, mainly in the electro-colloidal form, occurred prior to the 1940s, when pure silver was still being used. After 1940 a host of systemic antibiotics became prevalent, decreasing the use of silver except as a topical agent. During this transition, silver was complexed as a salt (e.g. silver nitrate and silver sulfadiazine) or other compound (e.g. silver protein) to increase the available silver ion concentration. These silver complexes remain popular topical antimicrobial agents for the care of wounds. Silver itself is considered to be non-toxic to human cells in vivo.(4) The only reported complication is the cosmetic abnormality argyria, caused by precipitation of silver salts in the skin and leading to a blue-gray color.
Thus, if none of the chemical species produced include silver hydroxide complexes with the general formula Ag_x(OH)_y (charge = x - y), then it is conceivable that uptake could occur through the orthophosphate pathway. Such a scenario would explain the rapid uptake and kill of microorganisms as well as the susceptibility of silver-resistant organisms. The presence of Ag(0) suggests that there must be clusters present, as it is unlikely that a bare atom such as Ag(0) could exist on its own. These clusters may exist as uncharged or charged entities, but it is unknown what the biological activity might be. It is known that other heavy metals such as Au and Pt have unique biological properties, including anti-inflammatory and apoptosis-inducing (anti-tumour?) activity. Since these activities have not been observed with Ag(+) in the past, but they have been observed with dissolution products of nanocrystalline silver, it is postulated that the other species released, including Ag(0), may be partly or wholly responsible for the unusual biological properties.
Nanotechnology in Heart, Lung, Blood, and Sleep Medicine
The National Heart, Lung, and Blood Institute convened a Working Group of investigators on February 28, 2003, in Bethesda, Maryland to review the challenges and opportunities offered by nanotechnology. The Working Group members included engineers, chemists, biologists, and physicians with an interest in applying nanotechnology and nanoscience to problems in heart, lung, blood and sleep medicine. The Working Group participants first reviewed the responses received to a Request for Information. The participants then discussed the scientific opportunities which nanotechnology and nanoscience bring to research and treatment for heart, lung, blood, and sleep diseases, identifying areas of particular promise. Drug delivery and therapeutics, molecular imaging, diagnostics and biosensors, and tissue engineering and biomaterials were thought by Working Group members to be fields where nanotechnology was likely to have an impact in the near future.
The Working Group next addressed perceived needs and barriers hindering the development and application of nanotechnology solutions to disease problems. Since investigators working in heart, lung, blood and sleep research are rarely skilled in the use of nanomaterials and nanotechnologies, while investigators with nanotechnology skills rarely focus on heart, lung, and blood disorders, fostering partnerships between the two communities was recognized as being essential for bringing nanotechnology and nanoscience into the clinical arena. The provision of centralized resources, for example molecular libraries for intra- and extracellular targeting, to provide broad access to resources in a cost-effective way was also discussed.
The Working Group went on to identify specific disease examples where the application of nanotechnology and nanoscience is likely to be of particular benefit in the next five to ten years. Areas recognized as being ready for the application of nanotechnology and nanoscience included: 1) diagnosis and treatment of vulnerable plaque; 2) tissue repair, engineering and remodeling for replacement and repair of blood vessels and heart and lung tissue; 3) diagnosis, treatment and prevention of lung inflammatory diseases; 4) multifunctional devices capable of monitoring the body for the onset of thrombotic or hemorrhagic events, signaling externally and releasing therapeutic drugs; and 5) in vivo sensors monitoring patients for sleep apnea.
Finally, the Working Group made recommendations for the Institute on how to support research in this field. The recommendations of the Working Group are to:
Create multidisciplinary research teams capable of developing and applying nanotechnology to heart, lung, blood, and sleep research and medicine; disseminating technology, materials, and resources; and training a new generation of investigators.
Support individual investigators to conduct research on the application of nanotechnology advances to biological and clinical problems.
Foster pilot programs and developmental research to attract new investigators and stimulate creative, high-impact research.
Encourage the small business community to become involved in the development of nanotechnology applications.
nano trend
One proposed application transmits cerebral aneurysm monitoring data at the application layer, broadcasting it over wireless radio and analyzing it with spread-spectrum techniques; developments in DNA nanotechnology could help detect such dangerous conditions in the human body.
DNA nanotechnology is a subfield of nanotechnology which seeks to use the unique molecular recognition properties of DNA and other nucleic acids to create novel, controllable structures out of DNA. The DNA is thus used as a structural material rather than as a carrier of genetic information, making it an example of bionanotechnology. This has possible applications in molecular self-assembly and in DNA computing.
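The molecular recognition at work is Watson-Crick base pairing: A binds T and C binds G, so a strand hybridizes with its reverse complement. A minimal sketch of computing that complement (the sticky-end sequence is a made-up example):

# Watson-Crick pairing: each base binds its complement (A-T, C-G).
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand):
    # The sequence that will hybridize to `strand`, read in the opposite direction.
    return "".join(PAIR[base] for base in reversed(strand))

print(reverse_complement("ATCGGC"))   # GCCGAT: the strand it will self-assemble with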
nano for computers
A computer is a machine that manipulates data according to a set of instructions. Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space.[2] Simple computers are small enough to fit into a wristwatch, and can be powered by a watch battery. Personal computers in their various forms are icons of the Information Age and are what most people think of as "computers". However, the embedded computers found in many devices, from MP3 players to fighter aircraft and from toys to industrial robots, are the most numerous.
Computers using vacuum tubes as their electronic elements were in use throughout the 1950s.
Apple is an American company that makes computer hardware, computer software, and portable devices like mobile telephones and music players. Apple calls its computers Macintoshes or Macs. Its popular line of mobile music players is called the iPod, and the mobile phone it has released is called the iPhone. Apple sells its products all around the world. One of the most popular products made by Apple is the iPod. All iPods with a screen can play music, display pictures, and play video. There are several different types of iPods.
Apple iPod touch
iPod touch 8 GB - has a touch screen and looks much like the iPhone. It can hold about 2,000 songs.
iPod touch 16 GB - This model can hold about 4,000 songs.
iPod touch 32 GB - This model can hold about 8,000 songs. It first went on sale in February 2008.
A 64 GB iPod touch, expected around 2009-2010, would hold about 16,000 songs (see the quick estimate below).
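All of the song counts quoted above follow from one rule of thumb: roughly 250 songs per gigabyte, i.e. about 4 MB per song, which is what a four-minute track encoded at 128 kbps works out to. A quick check of that assumption:

# Approximate songs-per-capacity rule implied by the figures above (an assumption).
SONGS_PER_GB = 250   # ~4 MB per song at 128 kbps
for capacity_gb in (8, 16, 32, 64):
    print(capacity_gb, "GB ->", capacity_gb * SONGS_PER_GB, "songs")
# prints 2,000 / 4,000 / 8,000 / 16,000 songs, matching the list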
Apple Desktops
Mac Mini (a very tiny, but fully functional computer that does not come with its own monitor, keyboard, or mouse. Used mainly for home and school)
iMac (a computer where everything is built in behind the screen, mainly for home and school)
Mac Pro (a powerful, fast computer that does not come with its own monitor, for professional people)
nanobots
Nanobots the size of living cells swimming around our bodies, doing our bidding to fight disease, make repairs, and augment our abilities? Futurists and sci-fi books have cooked up this fantasy for years, but will it really happen? Sorry to burst your bubble, sci-fi fans, but man-made, Fantastic Voyage-like motorized nanobots swimming through our bodies simply aren't in our near-term future. Luckily there is another way, and believe it or not a company called Dendreon has already done it!
So what’s the catch? Although Dendreon used nanobots to fight prostate cancer, they didn’t make the nanobots themselves! Instead, Dendreon used the masterfully equipped nanobots that already reside in our bodies. That’s right, I am talking about our own immune system. Enlisting the billions of cells of the body’s immune system as an army of specialized nanobots isn’t at all as fascinating as what we see in the movies, but it is every bit as effective and it is available now.
Just a few weeks ago we wrote a story on the breakthrough from Dendreon, but since then there have been some notable developments.
The magic behind Dendreon’s cancer therapy, called Provenge, is that it trains cells from the body’s immune system to identify the unique surface of individual prostate cancer cells anywhere in the body and destroy them. The idea is not a new one, but Dendreon appears to be the first to have succeeded in making it a reality.
At the time of our last story, Dendreon had announced that its phase 3 trial of Provenge had shown significant success in prolonging the life expectancy of prostate cancer patients, but held off on giving any other details until a meeting scheduled for April 28. Dendreon’s decision to hold off on this data until April 28 created a great deal of suspense and uncertainty, and many questioned whether or not the data was going to be as good as investors (and patients!) had hoped. To add further drama to the story, a freakish 50% drop in the stock just hours before Dendreon released its results on April 28th is yet to be explained.
Although Dendreon’s treatment currently focuses on fighting prostate cancer, the company (and its competitors) will be working furiously in the coming years to harness this same technique to fight other diseases.
Man-made nanobots are a cool idea, and their time will come eventually. But in the meantime why reinvent the wheel when the nanobots of our immune system are already sitting there, waiting to take our command? Dendreon shows us that this is indeed a viable (and financially rewarding!) technique, opening the door to an exciting new paradigm in medical treatment.
trading in nano
The foreign exchange market is the biggest market in the world. Its 3.2 trillion USD daily turnover dwarfs the combined turnover of all the world's stock and bond markets.
There are many reasons for the popularity of foreign exchange trading, but among the most important are the leverage available, the high liquidity 24 hours a day, and the very low dealing costs associated with trading.
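Leverage is the mechanic that deserves the most caution: a small margin deposit controls a much larger position, so gains and losses are both amplified. A hypothetical sketch (the 100:1 ratio and the price move are illustrative, not Saxo Bank terms):

# Hypothetical leveraged FX position: profit/loss scales with the full exposure.
margin = 1_000.0                 # trader's own funds, USD
leverage = 100                   # assumed 100:1 leverage
exposure = margin * leverage     # controls a 100,000 USD position
move = 0.005                     # a 0.5% move in the exchange rate
print(exposure * move)           # 500 USD: half the margin won or lost on a 0.5% move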
Commercial organisations participate purely because of the currency exposures created by their import and export activities, but the main part of the turnover is accounted for by financial institutions. Investing in foreign exchange remains predominantly the domain of the big professional players in the market: funds, banks and brokers. Nevertheless, any investor with the necessary knowledge of the market's functions can benefit from the advantages stated above.
The dollar was no longer suitable as the sole international currency at a time when it was under severe pressure from increasing US budget and trade deficits.
The lack of sustainability of fixed foreign exchange rates gained new relevance with these events in the financial markets.
Trading foreign exchange is exciting and potentially very profitable, but there are also significant risk factors. You can always discuss the matter in depth with one of our dealers; they are available 24 hours a day on the Saxo Bank online trading system, SaxoTrader.
The combination of our strong emphasis on customer service, our strategy and trading recommendations, our strategic and individual hedging programmes, along with the availability to our clients of the latest news and information builds a strong case for trading an individual account through Saxo Bank.
Health and environmental concerns
Some of the recently developed nanoparticle products may have unintended consequences. Researchers have discovered that silver nanoparticles used in socks to reduce foot odor are being released in the wash, with possible negative consequences. Silver nanoparticles, which are bacteriostatic, may then destroy beneficial bacteria which are important for breaking down organic matter in waste treatment plants or farms.
A study at the University of Rochester found that when rats breathed in nanoparticles, the particles settled in the brain and lungs, which led to significant increases in biomarkers for inflammation and stress response.
A major study published more recently in Nature Nanotechnology suggests some forms of carbon nanotubes – a poster child for the “nanotechnology revolution” – could be as harmful as asbestos if inhaled in sufficient quantities. Anthony Seaton of the Institute of Occupational Medicine in Edinburgh, Scotland, who contributed to the article on carbon nanotubes, said, "We know that some of them probably have the potential to cause mesothelioma. So those sorts of materials need to be handled very carefully." In the absence of specific nano-regulation from governments, Paull and Lyons (2008) have called for the exclusion of engineered nanoparticles from organic food. A newspaper article reports that workers in a paint factory developed serious lung disease, and nanoparticles were found in their lungs.
Implications
Due to the far-ranging claims that have been made about potential applications of nanotechnology, a number of serious concerns have been raised about what effects these will have on our society if realized, and what action if any is appropriate to mitigate these risks.
There are possible dangers that arise with the development of nanotechnology. The Center for Responsible Nanotechnology suggests that new developments could result, among other things, in untraceable weapons of mass destruction, networked cameras for use by the government, and weapons developments fast enough to destabilize arms races ("Nanotechnology Basics").
One area of concern is the effect that industrial-scale manufacturing and use of nanomaterials would have on human health and the environment, as suggested by nanotoxicology research. Groups such as the Center for Responsible Nanotechnology have advocated that nanotechnology should be specially regulated by governments for these reasons. Others counter that overregulation would stifle scientific research and the development of innovations which could greatly benefit mankind.
Other experts, including David Rejeski, director of the Woodrow Wilson Center's Project on Emerging Nanotechnologies, have testified that successful commercialization depends on adequate oversight, risk research strategy, and public engagement. Berkeley, California is currently the only city in the United States to regulate nanotechnology; Cambridge, Massachusetts considered enacting a similar law in 2008, but ultimately rejected it.
Tools and techniques
There are several important modern developments. The atomic force microscope (AFM) and the Scanning Tunnelling Microscope (STM) are two early versions of the scanning probes that launched nanotechnology. There are other types of scanning probe microscopy, all flowing from the ideas of the scanning confocal microscope developed by Marvin Minsky in 1961 and the scanning acoustic microscope (SAM) developed by Calvin Quate and coworkers in the 1970s, that made it possible to see structures at the nanoscale. The tip of a scanning probe can also be used to manipulate nanostructures (a process called positional assembly). The feature-oriented scanning-positioning methodology suggested by Rostislav Lapshin appears to be a promising way to implement these nanomanipulations in automatic mode. However, this is still a slow process because of the low scanning velocity of the microscope. Various techniques of nanolithography, such as optical lithography, X-ray lithography, dip-pen nanolithography, electron-beam lithography and nanoimprint lithography, were also developed. Lithography is a top-down fabrication technique in which a bulk material is reduced in size to a nanoscale pattern.
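The "low scanning velocity" bottleneck is easy to quantify: a raster scan's frame time is roughly the number of lines times the scanned line length (out and back) divided by the tip speed. With assumed, typical values:

# Rough raster-scan frame time for a scanning probe microscope (illustrative values).
lines = 512              # scan lines per image
line_length = 1e-6       # 1 micrometer per line
tip_speed = 20e-6        # assumed tip velocity: 20 micrometers per second
frame_time = lines * (2 * line_length) / tip_speed   # trace + retrace per line
print(frame_time, "seconds per frame")               # ~51 s for a single small image

At nearly a minute per frame, manipulating structures one feature at a time with a single probe is necessarily slow, which is the limitation noted above.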
Speculative
These subfields seek to anticipate what inventions nanotechnology might yield, or attempt to propose an agenda along which inquiry might progress. These often take a big-picture view of nanotechnology, with more emphasis on its societal implications than the details of how such inventions could actually be created.
Molecular nanotechnology is a proposed approach which involves manipulating single molecules in finely controlled, deterministic ways. This is more theoretical than the other subfields and is beyond current capabilities.
Nanorobotics centers on self-sufficient machines of some functionality operating at the nanoscale. There are hopes for applying nanorobots in medicine, though overcoming the practical drawbacks of such devices will not be easy. Nevertheless, progress on innovative materials and methodologies has been demonstrated, and some patents have been granted for new nanomanufacturing devices aimed at future commercial applications, which also gradually advances the development of nanorobots using embedded nanobioelectronics concepts.
Programmable matter based on artificial atoms seeks to design materials whose properties can be easily, reversibly and externally controlled.
Due to the popularity and media exposure of the term nanotechnology, the words picotechnology and femtotechnology have been coined in analogy to it, although these are only used rarely and informally.
Top-down approaches
These seek to create smaller devices by using larger ones to direct their assembly.
Many technologies that descended from conventional solid-state silicon methods for fabricating microprocessors are now capable of creating features smaller than 100 nm, falling under the definition of nanotechnology. Giant-magnetoresistance-based hard drives already on the market fit this description, as do atomic layer deposition (ALD) techniques. In 2007, Peter Grünberg and Albert Fert received the Nobel Prize in Physics for their discovery of giant magnetoresistance and their contributions to the field of spintronics.
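Because ALD grows films one self-limiting cycle at a time, film thickness is controlled simply by counting cycles. The minimal Python sketch below shows the arithmetic, assuming a growth per cycle of roughly 0.1 nm, a commonly quoted figure for thermal aluminium oxide ALD; the actual value depends on the process and is an assumption here.

# Minimal sketch (assumed numbers): thickness control in ALD by cycle counting.
import math

GROWTH_PER_CYCLE_NM = 0.1   # assumed typical value for thermal Al2O3 ALD

def cycles_for_thickness(target_nm: float, gpc_nm: float = GROWTH_PER_CYCLE_NM) -> int:
    """Number of self-limiting ALD cycles needed to reach a target thickness."""
    return math.ceil(target_nm / gpc_nm)

# A 10 nm film needs on the order of a hundred cycles:
print(cycles_for_thickness(10.0))   # -> 100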
Solid-state techniques can also be used to create devices known as nanoelectromechanical systems or NEMS, which are related to microelectromechanical systems or MEMS.
Atomic force microscope tips can be used as a nanoscale "write head" to deposit a chemical upon a surface in a desired pattern in a process called dip pen nanolithography. This fits into the larger subfield of nanolithography.
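A simple, commonly cited model for dip pen nanolithography is that ink diffuses outward from the tip, so dot area grows roughly linearly with dwell time and dot diameter roughly with its square root. The Python sketch below scales dwell time from a purely hypothetical calibration point (a 100 nm dot written in 1 s) under that assumption.

# Minimal sketch (assumed diffusion model): dwell time scales as the
# square of the desired dot diameter. The calibration values below
# (100 nm dot in 1 s) are hypothetical, not measured figures.

def dwell_time_for_diameter(d_target_nm: float,
                            d_ref_nm: float = 100.0,
                            t_ref_s: float = 1.0) -> float:
    """Dwell time for a dot of a given diameter, scaled from a
    calibration dot (d_ref written in t_ref) via t ~ d**2."""
    return t_ref_s * (d_target_nm / d_ref_nm) ** 2

# If a 1 s dwell gives a 100 nm dot, a 50 nm dot needs about 0.25 s:
print(f"{dwell_time_for_diameter(50.0):.2f} s")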
Focused ion beams can directly remove material, or even deposit material when suitable precursor gases are applied at the same time. For example, this technique is used routinely to create sub-100 nm sections of material for analysis by transmission electron microscopy.
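For a back-of-the-envelope sense of the milling step, the sketch below estimates sputtering time from the volume to be removed and the beam current, using an assumed nominal removal rate of 0.25 µm³ per nA·s; real rates vary considerably with material, beam voltage and geometry.

# Minimal sketch (assumed numbers): focused-ion-beam milling time from an
# assumed volumetric removal rate per unit beam charge.

SPUTTER_RATE_UM3_PER_NA_S = 0.25   # assumed ballpark; material- and beam-dependent

def milling_time_s(volume_um3: float, beam_current_na: float) -> float:
    """Seconds to sputter away a given volume at a given beam current."""
    return volume_um3 / (SPUTTER_RATE_UM3_PER_NA_S * beam_current_na)

# Rough-trenching about 1000 um^3 around a TEM lamella at 10 nA:
print(f"{milling_time_s(1000.0, 10.0) / 60:.1f} min")   # ~6.7 min; final
# polishing at much lower currents takes considerably longer.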