
Ministry of Education and Science Public Educational Institution of Higher Professional Education Tomsk Polytechnic University

Proceedings of the 17th International Scientific and Practical Conference of Students, Post-graduates and Young Scientists

MODERN TECHNIQUE AND TECHNOLOGIES MTT’ 2011

April 18 - 22, 2011 TOMSK, RUSSIA

UDK 62.001.001.5 (063) BBK 30.1L.0 S56

Russia, Tomsk, April 18 - 22, 2011. The seventeenth International Scientific and Practical Conference of Students, Postgraduates and Young Scientists "Modern Techniques and Technologies" (MTT'2011), Tomsk, Tomsk Polytechnic University. – Tomsk: TPU Press, 2011. – 266 p.

Editorial board of the proceedings of the conference in English:
1. Zolnikova L.M., Academic Secretary of the Conference
2. Sidorova O.V., leading expert of a department SRWM S&YS SA
3. Golubeva K.A., editor

UDK 62.001.001.5 (063)

CONFERENCE SCIENTIFIC PROGRAM COMMITTEE V.A. Vlasov

Chairman of the Scientific Program Committee, Vice-Rector for Research, Professor, Tomsk, Russia

L.M. Zolnikova

Academic Secretary of the Conference, TPU, Tomsk, Russia

O.V. Sidorova

Academic Secretary of the Conference, TPU, Tomsk, Russia

A.A. Sivkov

1st Section Chairman, TPU, Tomsk, Russia

S.V. Syvushkin

2nd Section Chairman, TPU, Tomsk, Russia

B.B. Moyzes

3rd Section Chairman, TPU, Tomsk, Russia

O.P. Muravliov

4th Section Chairman, TPU, Tomsk, Russia

G.S. Evtushenko

5th Section Chairman, TPU, Tomsk, Russia

B.S. Zenin

6th Section Chairman, TPU, Tomsk, Russia

V.A. Rudnitskiy

7th Section Chairman, TPU, Tomsk, Russia

A.P. Potylitsyn

8th Section Chairman, TPU, Tomsk, Russia

V.K. Kuleshov

9th Section Chairman, TPU, Tomsk, Russia

A.S. Zavorin

10th Section Chairman, TPU, Tomsk, Russia

M.S. Kukhta

11th Section Chairman, TPU, Tomsk, Russia

A.A. Gromov

12th Section Chairman, TPU, Tomsk, Russia

A.A. Stepanov

13th Section Chairman, TPU, Tomsk, Russia


Section I

POWER ENGINEERING


INVESTIGATIONS IN THE SPHERE OF WIRELESS ELECTRICITY R.S. Gladkikh, P.A. Ilin, I.S. Kovalev Scientific Supervisor: Prof., Dr. phys.-mat. V.F. Myshkin Scientific Advisor: Teach. A.E. Markhinin Tomsk Polytechnic University, Russia, Tomsk, Lenin str., 30, 634050 E-mail: [email protected]
Nowadays it is impossible to imagine the life of a modern person without electric devices, each of which requires charging, and numerous wires fill our space. In remote villages electricity is often unavailable, because building transmission lines requires large material investment. New technologies such as Bluetooth and Wi-Fi transfer information without any wires or direct connection between devices. Why should the same not be possible for electric power?
At the end of the 19th century the well-known Serbian inventor Nikola Tesla began working on this question. In the early 1900s he constructed a powerful installation for transmitting high-frequency electric power without wires over considerable distances; this project (the "Wardenclyffe" project) was financed by J. P. Morgan, a billionaire from New York. Tesla wished to supply electric power to the population of the most remote places of the globe and, in addition, intended to transmit information. Earlier, in 1898, Tesla had conducted experiments with a large high-voltage resonant transformer housed in a 60-meter wooden tower on a raised plateau at Colorado Springs (USA). Eyewitnesses reported light bulbs burning without batteries or current generators, among many other "miracles". The Westinghouse company became interested in the Wardenclyffe project and supplied the best electrotechnical equipment of the time for the experiments. However, by 1906 the financing was stopped [1]. There were also attempts to transmit energy by means of a laser beam; in this case, however, no physical obstacle may lie between transmitter and receiver, which makes the approach unsuitable for household conditions [2].
In 1943 the Soviet electrical engineer G. Babat built the first electric car fed from a distance, which was named the "high-frequency car". The following year an electric cart with a motor of about 2 kW was put into operation at one of the Soviet factories. It moved along asphalt paths under which copper tubes of small diameter were laid, carrying an alternating current at a frequency of 50 Hz. The effective range of these conductors was 2-3 m to each side. The first steps had been made, but, unfortunately, the losses of electric energy were large: about 1 kW of power was lost on each square meter of the line, and for the drive only 4 %
of the energy was used; the other 96 % was lost irrevocably. In further investigations scientists tried to increase the frequency of the feeding current, but without success. It was eventually found that the greatest losses arose from the vertical underground currents induced by the HF field, while radiation losses and a low overall efficiency still remained. After long research, an experimental line consuming 10 W of electric power per square meter of surface was built in Moscow at the end of 1947. Conductors made of thin-walled copper or aluminum tubes were laid in insulated channels or in asbestos-cement pipes. The electric carts were modified too: all metal parts that could be removed were removed. In 1954 several lines of water transport charged from the shore with high-frequency energy were launched in the USSR. None of these devices found wide application because of the large losses and low efficiency.
But progress does not stand still. Recently American scientists successfully tested a device that transfers energy without wires. Experts at the Massachusetts Institute of Technology managed to light a 60-W bulb located 2 m away from the energy source. The experimental device consists of two coils of 60 cm diameter wound with copper wire: a transmitter connected to the energy source and a receiver connected to the bulb. The lamp kept glowing even when wooden or metal objects, as well as electronic devices, were placed between it and the coils. The efficiency of the energy transmission was about 40 %. The device, named "WiTricity", exploits the resonance of low-frequency electromagnetic waves (in this case 10 MHz). In particular, WiTricity is based on using "strongly coupled" resonances to achieve a high power-transmission efficiency. Aristeidis Karalis, referring to the team's experimental demonstration, says that "the usual non-resonant magnetic induction would be almost 1 million times less efficient in this particular system". The researchers suggest that the exposure levels will be below the threshold of the FCC safety regulations, and that the radiated power levels will also comply with the FCC radio-interference regulations. "Now our problem is to reduce the size of our prototype, to increase the distance over which the electric power is transferred, and to
improve the transfer efficiency," says Professor Marin Soljacic, the head of the group of scientists working on the invention [3].
Russian scientists also work on this subject. A new device designed by Russian inventors consists of a parabolic mirror, in whose focus high-voltage spark dischargers are arranged in a circle, each connected to a high-voltage capacitor. During operation these dischargers and capacitors form oscillatory circuits that emit short-wavelength electromagnetic waves. If a large number of such circuits is placed in the mirror's focus, the device can send high-power electromagnetic radiation into space in a pulsed mode. This packet of energy travels through space at the speed of light, almost instantly, and the results of its propagation can be varied and very important. The idea is to reflect such an electromagnetic packet from the ionosphere and return it to the Earth. If a receiver in the form of an oscillatory circuit tuned to the corresponding wavelength is placed in the packet's path, its electromagnetic energy is converted into a high-frequency alternating current, which can then be converted into an alternating current of industrial frequency or a direct current of the required voltage. If energy pulses follow one another at short intervals, it becomes possible to transmit large amounts of electric power without wires over huge distances, not only on the Earth but also in space: to and from satellites, to the Moon and back [4].
The first large-scale experiment in history on receiving converted solar energy from a satellite is being planned. The state of Palau, with a population of about twenty thousand, is located in the Pacific Ocean. At the United Nations climate conference held in Indonesia, representatives of Palau agreed to cooperate with the USA in an experiment using solar energy as an ecologically clean source. America proposes to place on one of the uninhabited islands (Helen Island) a receiving antenna with a built-in rectifier (a so-called rectenna) about 80 meters in diameter. A satellite in low orbit (below 500 kilometers) will transmit energy in the form of microwaves to the rectenna, where it will be converted into direct current. The capacity of the system is expected to reach one megawatt, enough to supply one thousand houses, but the primary goal of the experiment is to confirm the safety of the method.
Research is being conducted not only for remote territories. Wireless transfer of electric power has been tested successfully on an experimental installation, and a full-size variant is now being built to supply
electricity to a remote village on Réunion Island, a French territory in the Indian Ocean. This village will become the first community in the world to use microwave power supply. The village lies at the bottom of a kilometer-deep canyon, and it was impossible to supply it with electricity by wires. Its inhabitants have had to use solar panels mounted on roofs, but these cost too much and the roof area is insufficient. Microwave systems cost less than solar panels and diesel generators together, and they do not require masts for suspending wires, which often provoke protests from environmental groups. According to representatives of the French space research agency CNES, which developed the new technology, supply through conventional networks is efficient enough near the center of the grid, but the costs grow very quickly with the distance to the consumer; the microwave technology may therefore prove advantageous even in accessible areas. The agency intends to begin trial transmission of electric power to the island by means of microwaves in 10 months, and the plant will be put into operation in three years [4].
A recent experiment by IBM also finished successfully: hundreds of watts of electric power were transferred without wires with an efficiency of more than 80 % over a distance of up to 1 meter. This is a serious result for the development of this field. The company wants to carry out further research in this direction: to reduce the size of the installation, to increase the amount of transferred energy and to raise the efficiency of the system.
Nowadays wired electric networks remain the most economical and widespread way of transmitting electric power. But the experiments now under way all over the world will probably prove the safety, convenience and practicality of wireless energy transmission. Then mankind could get rid of wires both in industry and in everyday life, supply electricity to areas located far from civilization, and receive the energy collected by solar panels in space. This alternative method of energy transmission will promote technological progress.
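To give a feel for why the "strongly coupled" resonance regime discussed above matters so much, the sketch below evaluates the standard coupled-mode-theory estimate of the optimal transfer efficiency versus the coupling figure of merit U = k/√(Γ1·Γ2). This expression comes from the general wireless-power-transfer literature rather than from this paper, and the U values are purely illustrative.

```python
import math

def optimal_efficiency(U):
    """Optimal power-transfer efficiency of two coupled resonators.

    U = k / sqrt(gamma_1 * gamma_2) compares the coupling rate k with the
    resonators' intrinsic loss rates (standard coupled-mode-theory result;
    illustrative, not a formula given in the paper).
    """
    return U**2 / (1.0 + math.sqrt(1.0 + U**2))**2

# A strongly coupled resonant pair (U of a few units) versus progressively
# weaker coupling, which mimics ordinary non-resonant induction at a distance.
for U in (3.0, 1.0, 0.1, 0.01):
    print(f"U = {U:5.2f} -> optimal efficiency ~ {optimal_efficiency(U) * 100:8.4f} %")
```

With U of a few units the efficiency reaches tens of percent, consistent with the roughly 40 % reported for the MIT demonstration, while for U << 1 it collapses to a tiny fraction of a percent.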

References
1. N. Tesla, Tesla about electricity (autobiography). Minsk, 1970.
2. R.V. Pol', Doctrine about electricity. PolScience, Warsaw, 1975.
3. Wireless transfer of electricity // PRO Electricity, No. 4(31), 2009, Moscow.
4. Innovations in electricity // Electricity, 06.1995, Moscow.


RECONSTRUCTION AND VISUALISATION OF LIMITER BOUNDARY FOR KTM TOKAMAK Malakhov A.A Supervisor: Pavlov V.M., Assoc., PhD Tomsk Polytechnic University, 30, Lenin Avenue, Tomsk, 634050, Russia E-mail: [email protected] 1. Introduction A tokamak is a device using a magnetic field to confine plasma in the shape of a torus (doughnut). Achieving stable plasma equilibrium requires magnetic field lines that move around the torus in a helical shape. Such a helical field can be generated by adding a toroidal field (traveling around the torus in circles) and a poloidal field (traveling in circles orthogonal to the toroidal field). In a tokamak, the toroidal field is produced by electromagnets that surround the torus, and the poloidal field is the result of a toroidal electric current that flows inside the plasma. This current is induced inside the plasma with a second set of electromagnets. (Figure 1) The tokamak is one of several types of magnetic confinement devices, and is one of the most-researched candidates for producing controlled thermonuclear fusion power. Magnetic fields are used for confinement since no solid material could withstand the extremely high temperature of the plasma. An alternative to the tokamak is the stellarator. The Tokamak is not as well known as the common "uranium core reactor" or UCR.

Development of methods, algorithms and software for reconstructing the magnetic surface of the plasma from external magnetic measurements is necessary for real-time control of the plasma position and shape, as well as for other physical diagnostics and for analysis in the intervals between discharges. The magnetic topology is first derived from the magnetic measurements; from it, the shape and position of the last closed magnetic flux surface (LCFS) and the radial dependence of the relevant shape parameters (such as elongation and triangularity) are determined.
2. Equilibrium reconstruction technique
To achieve high reconstruction quality, many efficient methods and numerical codes for magnetic analysis have been developed. Among them, the fixed filament current approximation is the most frequently used; a modification of this algorithm using gradient descent is proposed in [3]. However, the task of finding the shape and position of the plasma has no unique solution: for the same sensor readings it is possible to construct several models of the plasma column shape, each of which satisfies the measurement conditions. To resolve this ambiguity the plasma boundary must be constructed taking into account that it is restricted by the limiter; this gives more precise information to the engineers supervising the process. The presented application calculates the plasma parameters at a given time. The primary goal of this work is to add corrections and extensions to the main program, in particular libraries of functions for constructing the limiter plasma boundary; both functions already implemented in the main application and new methods are used. The application skeleton is represented in Figure 2.
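As a minimal illustration of the fixed filament current approximation mentioned above (a sketch under simplifying assumptions, not the code of the KTM application): the plasma current is represented by a few filaments at fixed positions, and the filament currents are found by a linear least-squares fit of the modelled sensor signals to the measured ones. The matrix G, the synthetic measurements and all sizes below are hypothetical placeholders.

```python
import numpy as np

def fit_filament_currents(G, m, reg=1e-6):
    """Least-squares fit of fixed-filament currents to magnetic measurements.

    G   : (n_sensors x n_filaments) matrix; G[i, j] is the signal a unit current
          in filament j produces in sensor i (precomputed Green's functions).
    m   : measured sensor signals (magnetic probes, flux loops).
    reg : small Tikhonov regularization to stabilize an ill-conditioned fit.
    Returns the filament currents I minimizing ||G I - m||^2 + reg * ||I||^2.
    """
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + reg * np.eye(n), G.T @ m)

# Purely illustrative numbers: 3 filaments, 8 sensors, synthetic noisy signals.
rng = np.random.default_rng(0)
G = rng.normal(size=(8, 3))
I_true = np.array([120e3, 80e3, 50e3])            # "true" filament currents, A
m = G @ I_true + rng.normal(scale=1e2, size=8)    # noisy sensor readings
print(fit_filament_currents(G, m))
```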


Figure 1 – Reactor structure


Figure 2 – Simplified diagram of the program (main program, plasma-parameter calculator, reconstruction and visualisation; libraries with draw, field and AVI functions; DATA and BOUNDARY files)
3. Methods
The main program contains a function which calculates the components of the magnetic induction vector and the magnetic flux at any point of the chamber. This function is at the heart of the plasma-boundary calculation algorithm. Various methods for this calculation have
been found in the course of the work. They differ from each other in execution speed (which is important for real-time plasma control) and in calculation error. The first, most obvious and simplest method is represented schematically in Figure 3.

Figure 3 – The first method
This method represents the whole boundary as a set of points. Each point is obtained from the previous one by adding an increment vector that depends on the vector B. Theoretically the error approaches zero as the step is reduced. The reference point is the point of contact between the diaphragm and the plasma. To increase the speed of the subroutine the first method was refined: at each boundary point the local radius is computed from the previous points, which makes it possible to detect the boundary offset and correct the point coordinates. The refined method is shown in Figure 4.
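A minimal sketch of the first method, assuming a function returning the poloidal field components at a point is available (here a hypothetical field with circular flux surfaces is used; in the real application the chamber field function described in Section 3 would be called instead):

```python
import math

def b_poloidal(r, z):
    """Hypothetical poloidal field with circular flux surfaces around (R0, 0).
    Stands in for the chamber field function of the main program."""
    R0 = 1.0
    return -z, (r - R0)          # (B_R, B_Z), tangent to circles around (R0, 0)

def trace_boundary(r0, z0, step=1e-3, max_points=100000):
    """First method: march each boundary point along the local field direction,
    starting from the contact point of the diaphragm (limiter) and the plasma."""
    points = [(r0, z0)]
    r, z = r0, z0
    for _ in range(max_points):
        br, bz = b_poloidal(r, z)
        norm = math.hypot(br, bz)
        r += step * br / norm
        z += step * bz / norm
        points.append((r, z))
        # stop once the curve comes back near the starting point
        if len(points) > 100 and math.hypot(r - r0, z - z0) < 10 * step:
            break
    return points

boundary = trace_boundary(1.3, 0.0)      # limiter contact point, illustrative
gap = math.hypot(boundary[-1][0] - 1.3, boundary[-1][1] - 0.0)
# As noted in Section 4, with simple stepping the first and last points
# do not coincide exactly; "gap" quantifies that closure error.
print(len(boundary), "points, closure gap =", gap)
```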

Figure 4 – The refined method
4. Benchmarking and testing
Testing of the subroutine showed that the first method is not accurate enough: the first and last points of the boundary did not coincide, leaving a large error (Figure 5).

Reducing the step does not improve the result, because the error accumulates at every step, and the runtime becomes much larger than admissible (3 ms). The second method, in contrast, calculates the boundary successfully within a rather short time.
5. Conclusion and future developments
An application was designed in MS Visual C++ 2005 using the methods and algorithms described above. Results:
1. Visualization of the limiter plasma boundary and saving of the data to a separate file are implemented.
2. The input and output data files can be changed.
3. Numerical experiments have been carried out; the average calculation time of the plasma boundary is less than 1 ms.
Prospects:
1. Other physical processes taking place in the chamber should be taken into account.
2. Maximum optimization of the program code.
3. Checking the speed and the demands on computing resources.
4. Testing the program directly in actual operation on the KTM tokamak.
6. References
[1] E.A. Azizov, KTM project (Kazakhstan Tokamak for Material Testing), Moscow, 2000.
[2] L. Landau, E. Lifshitz, Course of Theoretical Physics, vol. 8, Electrodynamics of Continuous Media, 2nd ed., Pergamon Press, 1984.
[3] Q. Jinping, Equilibrium Reconstruction in EAST Tokamak, Plasma Science and Technology, Vol. 11, No. 2, Apr. 2009.
[4] W. Zwingmann, Equilibrium analysis of steady state tokamak discharges, Nucl. Fusion 43, 842, 2003.
[5] O. Barana, Real-time determination of internal inductance and magnetic axis radial position in JET, Plasma Phys. Control. Fusion 44, 2002.
[6] L. Zabeo, A versatile method for the real time determination of the safety factor and density profiles in JET, Plasma Phys. Control. Fusion 44, 2002.
[7] Raeder, J., et al. (1986). Controlled Nuclear Fusion. John Wiley & Sons.

Figure 5 – Boundary calculated by the first method


MAGNETIC GENERATOR – THE BEST SOLUTION FOR FREE POWER Morozov A.L., Skryl A.A. Supervisor: Professor Kachin S.I. Language Advisor, Senior Instructor: Sokolova E.Y. Tomsk Polytechnic University, 634050, Russia, Tomsk, St. Lenina 30 E-mail: [email protected]
Mankind has long been concerned, and is still concerned, with finding the best, most economical and therefore most environmentally friendly source of energy. This is why international interest in the topic of "perpetual motion" remains large and growing, as civilization's energy needs rise, organic non-renewable fuels approach exhaustion, and a global energy and environmental crisis looms. When building the society of the future it is important to develop new energy sources that can cover our needs. Nowadays the issue of finding a reliable, efficient, clean and renewable source of energy is extremely vital for many countries, including Russia, despite the fact that our country is rich in fossil fuels and has sufficient gas and oil reserves to supply the national industry, and even other countries, for some time to come. However, most of these reserves will be used up in the coming years. Even though our country is in the fortunate position of having huge quantities of gas and oil underground, we need innovative ways to secure a reliable source of energy for the long term. For the future development of the country and in view of the coming energy crisis, new energy sources based on breakthrough technologies will be absolutely necessary [1, 2].
The intention of this article is to present the concept of highly efficient, reliable and low-cost power produced by a magnetic generator. This magnetic generator was designed to meet the requirements of generating free power. Trials showed high efficiency, durability and superior performance of this machine. In order to show the advantages of the proposed system the following tasks should be fulfilled:
• to analyze the conventional technology;
• to choose the option offering the best characteristics;
• to compare the conventional technology with the designed one.
A conventional magnetic generator is a piece of equipment that uses the properties of electromagnetism to generate electricity without needing an external fuel source, which is crucial nowadays. The basic structure of a magnetic generator is fairly simple. Firstly, a wheel is needed to rotate around an axis. This wheel rotating
around the axis functions as a flywheel. The flywheel is fitted with magnets, all of the same polarity, and is installed inside a stationary wheel whose inner surface carries magnets of the opposite polarity.
The technical features of the designed magnetic generator are as follows; its construction is presented below in Fig. 1 and Fig. 2. The generator consists of a stator (fixed part), which includes a wheel with magnets and the windings from which the voltage is collected, and a rotor installed inside the stator to intensify the magnetic field; the rotor consists of a magnet. The difference of the proposed machine lies in the rotor construction. To make the machine operate, the flywheel is simply given a spin. The opposite magnets attract each other, causing the flywheel to rotate faster, and electricity is generated as the speed of the flywheel increases.
The main issue to be solved was adjusting the frequency of this machine to obtain the desired industrial frequency of 50 Hz, which suits almost all tools, mechanisms and equipment in use. We suggest two possible ways to solve this problem. The first method is to vary the frequency by changing the number of poles: increasing the number of poles increases the frequency. The second method is to adjust the gap between the rotor and the stator, which in turn changes the frequency. To decrease the effect of electromagnetic waves and to reduce the rotation frequency of the machine, shielding of the stator can be used as a brake; as a result the magnets will not be demagnetized. Trials so far suggest that the new design is exceptionally durable and, compared with the old version, highly efficient and extremely smooth-running. The new technology can be used both in industry and for residential needs.
Comparing the magnetic generator with, for example, a diesel generator shows its advantages. Firstly, the magnetic generator does not need any fuel: where a diesel generator burns diesel, the magnetic one uses only the magnetic flux of a permanent magnet. Secondly, taking this advantage into consideration, we can
understand that the magnetic generator is environmentally friendly. Thirdly, using this generator makes you independent of any external power supply, so you can forget about electricity bills. Fourthly, a diesel generator has combustion chambers, which cause noise and vibration; the magnetic generator has none, so noise and vibration are completely eliminated. We have come up with a completely unique profile of the given machine, which makes it possible to
improve the technology and reduce the system operating costs compared with existing conventional magnetic generators. Special features, such as changes in the construction and the frequency adjustment, have been added that differentiate the proposed technology from conventional systems. The proposed new source of energy is, first of all, practical and cheap; secondly, economical to set up and maintain; highly efficient; and, finally, kind to our planet [3].
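The pole-count route to frequency adjustment described above can be illustrated with the standard relation between pole number, rotation speed and electrical frequency for a rotating-magnet machine (a general textbook relation, not a formula given by the authors; the speed below is purely illustrative):

```python
def electrical_frequency(poles, rpm):
    """Electrical output frequency in Hz: f = p * n / 120,
    with p the number of poles and n the rotation speed in rev/min
    (standard synchronous-machine relation, used here for illustration)."""
    return poles * rpm / 120.0

# At a fixed rotation speed, adding poles raises the output frequency,
# which is the first frequency-adjustment method proposed above.
for poles in (2, 4, 6, 8):
    print(f"{poles} poles -> {electrical_frequency(poles, 750):.1f} Hz at 750 rpm")
```

At the illustrative speed of 750 rpm, only the 8-pole variant reaches the desired 50 Hz.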

Figure 1. Main view

Figure 2. Top view

References
1. www.bukisa.com/articles/241506_anintroduction-to-how-magnetic-energy-isthe-new-residential-alternative-to-producefree-electricity
2. www.33energy.com/free/
3. www.eco20-20.com/Magnetic-MotorGenerator.html

ASYNCHRONOUS MODE OF A SYNCHRONOUS GENERATOR Feodorova Ye.A. Scientific advisor: Kolomiec N.V., Ph.D., docent Linguistic Advisor: Korobov A.V. Tomsk Polytechnic University, 634050, Russia, Tomsk, Lenin str., 30 E-mail: [email protected]
The aim of this article is to give an overview of the asynchronous mode of a generator, its consequences and the ways of protection against it. Analysis of the reasons for asynchronous-mode occurrence, of the power-system parameters and of the additional measures to be taken to eliminate such a mode helps to prevent significant system faults, which can in turn cause an avalanche-like disconnection of consumers, bringing not only damage to the equipment but also unforeseen expenses.
The term "asynchronous mode" means short-time operation of the power system with non-synchronous operation of one or several generators, caused by loss of excitation or by instability. In the asynchronous mode of an excited synchronous generator the phase shift between the EMF vector of the generator and the voltage vector of the system changes continuously. In this situation the synchronous machine works either
in generator mode or in motoring mode, and this process is accompanied by large compensating currents, big voltage deviations and high torque values, which in turn affect the generator and the turbine. A stable asynchronous mode is possible after the generator loses excitation [1].
Asynchronous mode caused by loss of excitation is a commonly encountered problem and therefore deserves detailed consideration. The reasons for loss of excitation are various: it can be caused by a fault in the excitation circuit, by tripping of the secondary protection and control circuits, or by mistakes of the operating staff. Depending on the conditions in a particular power system, a generator operating in asynchronous mode after loss of excitation can be returned to parallel operation with the
power system by means of special measures such as emergency automation, or it can shift into a stable asynchronous mode. This mode is characterized by the generator consuming reactive power from the power system to obtain its excitation while still delivering a certain amount of active power to the system. The generated active power in the stable asynchronous mode must be reduced compared with the normal (previous) mode. At the same time the braking synchronous torque drops to zero, the generator's rotation speed increases, and this leads to a slip of 0.3-0.7 %. The stable asynchronous mode is acceptable for a short period of operation both for the generator and for the power system. The "Rules of technical operation of power plants and networks in the RF" allow operation of generators in asynchronous mode without excitation for a limited time: for turbo generators with a rated power of 63-500 MW, short-term operation (not exceeding 15 minutes) in asynchronous mode is allowed, and in this case the load should not exceed 45-50 % of the rated value [2]. Since in asynchronous mode the generator consumes a rather large amount of reactive power, the capacity of the electrical system should be sufficient to maintain the voltage at the busbars of adjacent connections at a level of not less than 70 % of the rated voltage, to ensure stable operation of the generators working in parallel. On the whole, long-term operation of a turbo generator in asynchronous mode is restricted for the following reasons:
- increase of the stator current due to a significant increase of its reactive component;
- losses from eddy currents in the rotor body;
- increased current losses in the stator;
- shortage of reactive power in the power system.
Use of the asynchronous mode with subsequent resynchronization of the generator (after restoration of excitation) allows keeping it in operation. In certain cases, however, the asynchronous mode may be unacceptable because of a shortage of reactive power in a particular network section; in this case the generator must be stopped immediately. For hydraulic turbine generators the stable asynchronous mode is strictly forbidden. This restriction is easy to explain with the help of Fig. 1. The torque curve of a hydraulic turbine generator with damper windings (3) reaches rather high torque values only at a large slip (3-5 %); the curve of a turbo generator (1) reaches high torque values at insignificant slip, which is acceptable for the stable asynchronous mode. The torque curve of a hydraulic turbine generator without damper windings (2) does not even reach the rated torque, so in this case a stable asynchronous mode is impossible.
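The operating limits quoted above (and the 30 % limit for damped hydro generators given below) can be collected into one rough check; this is only a sketch, with the machine classes and thresholds simplified from the figures cited in this article:

```python
def asynchronous_mode_permissible(machine, rated_mw, load_pu, minutes, busbar_voltage_pu):
    """Rough check of the limits for operation without excitation cited in the text.

    machine           : "turbo" or "hydro_damped"; hydro generators without
                        damper windings are simply forbidden this mode.
    rated_mw          : rated power, MW.
    load_pu           : active load as a fraction of the rated power.
    minutes           : intended duration of asynchronous operation.
    busbar_voltage_pu : voltage at adjacent busbars, per unit of rated voltage.
    """
    if busbar_voltage_pu < 0.70:        # reactive-power / voltage-support requirement
        return False
    if machine == "turbo":
        # 63-500 MW turbo generators: at most 15 min at 45-50 % load (45 % used here)
        return 63 <= rated_mw <= 500 and minutes <= 15 and load_pu <= 0.45
    if machine == "hydro_damped":
        # modern hydro generators with a damping system: load up to 30 % of rated
        return load_pu <= 0.30
    return False

print(asynchronous_mode_permissible("turbo", 200, 0.40, 10, 0.75))        # True
print(asynchronous_mode_permissible("hydro_damped", 100, 0.50, 5, 0.80))  # False
```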


Fig. 1. Asynchronous torque curves: 1 – turbo generator; 2 – hydraulic turbine generator with damper windings; 3 – hydraulic turbine generator without damper windings.
However, the modern design of powerful hydro generators with a damping system allows operation of such generators in the stable asynchronous mode; the torque curve of such a hydro generator is similar to that of a turbo generator. In this case the load should be decreased to a value not exceeding 30 % of the rated one [1].
Relay protection should detect the asynchronous mode regardless of the reasons for its occurrence, but its operation must be selective. There are several methods of detecting the asynchronous mode. The most informative one is analysis of the angle between the EMF vector of the generator and the voltage vector of the system, but this method does not explain the reasons for the asynchronous-mode occurrence, and in a complex power system it is not always possible to obtain the system voltage vector. For reliable detection of the asynchronous mode, the fact of the current increase in asynchronous mode and the periodic variation of the rms current as a function of the angle are used. The sensitive element of the relay protection is also supplemented by a directional power relay, which operates only within a certain range of the angle. The combination of both these factors allows selective detection of the asynchronous mode and operation of the relay protection already at the first rotation of the EMF vector [3]. Another reliable method of detecting the asynchronous mode is based on measuring the change of the generator impedance. This type of relay protection is called protection against loss of excitation. For a long time this protection was implemented in electromechanical relays with a circular, or in some cases elliptic, operating characteristic located in the third or fourth quadrant. This is because in the normal mode the generator impedance characteristic lies in the first or second quadrant, and immediately after loss of excitation the generator begins to consume reactive power from the power
system, and as a result the generator impedance vector shifts into the third or fourth quadrant. But a circular or elliptic relay characteristic is not always sufficient to protect the generator against loss of excitation, which is one of the causes of the asynchronous mode. Nowadays relay protection against loss of excitation is based on microprocessor technology, as one element of the generator protection set. It allows different shapes of the impedance characteristic to be programmed, thus improving the reliability of relay operation. The set of generator relays also includes automatic elimination of asynchronous operation. This automation must detect the occurrence of an asynchronous mode and generate a signal to disconnect the part of the power system with non-synchronously operating generators; it is called "disconnection" automation. Today such protection-relay sets are manufactured by both Russian and foreign companies, for example EKRA Ltd, STC "Mechanotronica", AREVA T&D, Siemens, VAMP and others [4].
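The quadrant logic described above can be sketched as a simple check on the apparent impedance seen by the relay; the circular characteristic used here (an offset circle in the lower half of the impedance plane) and its numerical settings are illustrative assumptions, not parameters of any particular relay:

```python
def apparent_impedance(u_phasor, i_phasor):
    """Apparent impedance seen by the relay from voltage and current phasors."""
    return u_phasor / i_phasor

def loss_of_excitation_pickup(z, center=-5j, radius=4.0):
    """True if Z falls inside a circular characteristic placed in the lower
    half-plane (third/fourth quadrants), where the impedance vector moves
    after loss of excitation. Center and radius are hypothetical settings, ohms."""
    return abs(z - center) <= radius and z.imag < 0

# Normal load: impedance in the first quadrant -> no pickup.
z_normal = apparent_impedance(100 + 0j, 8 - 3j)
# After loss of excitation the generator absorbs reactive power and the
# impedance trajectory enters the lower half-plane -> pickup.
z_loe = 1.5 - 4.0j
print(loss_of_excitation_pickup(z_normal), loss_of_excitation_pickup(z_loe))
```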

This article shows that a special set of additional measures, relay protection and automation, widely represented on the electrotechnical market, can prevent the negative consequences of the asynchronous mode, such as an avalanche-like disconnection of consumers and massive system faults.
References:
1. Pavlov G.M., Merkuriev G.V. Automatics of power systems. Training Center of RAO UES of Russia, 2001. – 387 p., with fig.
2. Rules of technical operation of power plants and networks in RF. Ministry of Energy. – M.: CC "Energoservice", 2003. – 368 p.
3. Vavin V.N. Relay protection of the turbogenerator-transformer block. – M.: Energoizdat, 1982. – 256 p., with fig.
4. [Electronic resource]. – Mode of access: http://www.ekra.ru/production/gen/sub_rza_stancionnogo_oborudovaniya/


Section II

INSTRUMENT MAKING


A NEW TYPE OF TORQUE MOTOR WITH A PACK OF PLATES Ivanova A.G. Scientific adviser: Martemjanov V.M., candidate of science, associate professor Linguistic adviser: Kozlova N.I. Tomsk Polytechnic University, 634050, Tomsk, pr. Lenina, 30 E-mail: [email protected]
Conventional direct current (DC) motors are highly efficient, and their characteristics make them suitable for use as servomotors. Their only drawback is that they need a commutator and brushes, which are subject to wear and require maintenance. When the functions of the commutator and brushes were implemented by solid-state switches, maintenance-free motors were realized; these motors are now known as brushless DC motors. A brushless motor working as the executive device of gearless automatic systems is called a torque motor. The motor construction is illustrated in picture 1: there are polyphase stator windings, a permanent-magnet rotor, a rotor position sensor (usually Hall elements) and an electronic commutator, which is not shown in picture 1 [1].

Pic. 1. Disassembled view of a brushless DC motor
To increase the torque of a brushless torque motor it is necessary to increase the current flowing in the stator winding. This may cause overheating and destruction of the stator winding, which has poor heat rejection because of its design: in most brushless motors the conductors are insulated in the slots of a ferromagnetic stator core, or the stator core is made of dielectric materials. To solve this problem, a new type of torque motor has been worked out, in which a large shaft torque is produced by increasing the current consumption while the active part of the motor is not overheated. The active element of this brushless torque motor is a laminated structure, part of which is a pack of plates. To explain the operating principle of the executive device with a pack of plates, let us consider a homogeneous rectangular, electrically
conducting plate made of copper or aluminum [2]. The points of connection to the electrical circuit are at its diagonally situated corners (pic.2).

Pic.2. Electrically conducting plate It can be affirmed that separate currents Ii, composing the current distributed over the plate, have two components Ix and Iy in each point of the plate. The magnetic flux with induction B crosses the plate on a normal. The operating zone of the magnetic flux is marked by the dotted line. If to sum up all the current components flowing in the magnetic flux operating zone, we will see that there are two components of the full current Ix and Iy in the zone. The correlation between these two components is determined by the conductive plate geometry. The current Ix, interacting with the magnetic flux creates the force Fy, directed along the axis Y. The current Iy, interacting with the magnetic flux, creates the force Fx, directed along the axis X. These forces will act between the plate and the magnetic field source, causing their mutual movement. Let’s suppose that the plate is immovable, and the source of magnetic field can move along the axis X. In this case, the action of the force Fy created by current Ix will be compensated in the bracket support of the magnetic field source. And the force Fx caused by the current Iy will create the necessary torque on some shoulder. The created force can be increased by a serial electrical connection of another analogous plate. This plate is assembled over (or under) the first one. Their surfaces have to be parallel and separated by an electrically insulating material. The scheme of plates connection is the following [3]: at two diagonally lying corner points of the plates there are contacts which lie on

diagonals of the same direction on the odd plates and on diagonals of the opposite direction on the even plates. Each plate is connected to the neighboring plates into a series electrical circuit by jumpers (pic. 3).

When a direct electric current flows through the pack of plates, a certain force acts on the movable permanent magnet. This force depends on the current magnitude and on the mutual position of the magnet and the pack; its value is proportional to the current. The dependence of the force on the mutual position of the magnet and the pack is given in picture 5 [2].
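A rough estimate of the force and torque developed by the pack follows directly from the Lorentz-force picture above; the induction, current, zone length, plate count and lever arm in this sketch are hypothetical illustration values, not data from the paper:

```python
def pack_force_and_torque(b_t, current_a, zone_length_m, n_plates, lever_arm_m):
    """Simplified estimate for the plate-pack actuator.

    Each plate carries the useful current component through the magnet's
    operating zone of length zone_length_m in a field of induction b_t,
    giving a force F = B * I * L per plate; the series-connected plates add
    their forces, and the torque is the total force times the lever arm.
    (Idealized: the full current is assumed to cross the zone at right angles.)
    """
    force_per_plate = b_t * current_a * zone_length_m
    total_force = n_plates * force_per_plate
    return total_force, total_force * lever_arm_m

force, torque = pack_force_and_torque(b_t=0.4, current_a=5.0, zone_length_m=0.03,
                                      n_plates=20, lever_arm_m=0.05)
print(f"force ~ {force:.2f} N, torque ~ {torque:.3f} N*m")
```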

Pic. 3. Pack-of-plates connection
In this case the forces Fx created by the currents of both plates are summed, while the forces Fy cancel. The force Fx is directed along the X axis and creates the required torque of the motor. A further increase of this force is achieved by installing and similarly series-connecting new pairs of plates into the plate circuit. Finally, these plates form a united pack in a structural case. An example of the technical implementation of the proposed torque motor is illustrated in picture 4 [4]: it contains the pack of plates 1, the movable permanent magnet 2 with a lever, and the shaft 3.


Pic. 5. The dependence of the force on the mutual position of the magnet and the pack of plates (force F, ×10⁻⁴ N, versus position L, cm)
References
1. Kenjo, Takashi. Permanent magnet and brushless DC motors. Oxford, 1985. – 194 p.
2. Ivanova A.G., Martemyanov V.M., Plotnikov I.A. Linear motor with an active pack element // Devices and Systems. Management, Control, Diagnostics. – 2010. – No. 11. – P. 36-39.
3. Ivanova A.G. Experimental setup for studying an executive device with a pack element // Modern Technique and Technologies: Proceedings of the XVI International Scientific and Practical Conference of Students, Postgraduates and Young Scientists. – Tomsk: TPU Press, 2010. Vol. 1. P. 427-428.
4. Torque motor. RF Patent No. 22378755: IPC H02K 26/00 / V.M. Martemyanov, I.A. Plotnikov, E.N. Goryachok, A.V. Kvadyaeva. – Filed 01.12.08; published 10.01.10, Bull. No. 1. – 8 p.

Pic.4. Torque motor


SCINTILLATION DETECTORS OF IONIZING RADIATION M.K. Kovalev Research advisor: P.V. Efimov Linguistic advisor: G.V. Shvalova Tomsk Polytechnic University, 634050, Tomsk, Russia e-mail: [email protected]
INTRODUCTION
Ionizing radiation enters our lives in a variety of ways. It has many practical uses in medicine, nondestructive testing and other areas, but presents a health hazard if used improperly. It is therefore necessary to develop advanced and effective methods of radiation registration. This area should be of interest to professionals working in the field of nondestructive testing who apply research instruments based on sources of ionizing radiation; the topic is also relevant to military specialists, airport security officers and those responsible for other crowded places.
There are not many physical phenomena that allow the registration of radiation. Nevertheless, various instruments and devices are used for the detection of radiation, and the development of new detectors, recording equipment and methods of data processing remains an urgent task. In order to obtain the necessary information, ionizing radiation is usually converted by various detectors into an electrical signal which is then processed. Ionizing radiation can be detected by a variety of gauges based on different principles of operation:
- ionization counters: ionization chamber, proportional counter, Geiger counter;
- particle track devices: cloud chamber, bubble chamber, spark chamber;
- scintillation counters: organic, inorganic and gaseous scintillators.
The best known ionization-type device is the Geiger-Muller counter. This device consists of two parts, a detecting tube and a counter. The heart of the system is the detecting tube, which consists of a pair of electrodes surrounded by an ionizable gas. As radiation enters the tube, it ionizes the gas. The ions produced travel toward the electrodes, between which a high voltage is applied, and cause pulses of current at the electrodes, which are picked up and recorded by the counter [1]. Other ionization counters also work on the principle of collecting information on the ions
formed as the radiation passes through the detector. Radiation detection can also take the form of devices which visualize the track of the ionizing particle. In its most basic form, a cloud chamber is a sealed housing containing a supersaturated vapor of water or alcohol. When an alpha or beta particle interacts with the gas mixture, it ionizes it; the resulting ions act as condensation nuclei around which a mist forms, because the mixture is on the point of condensation. The high energies of alpha and beta particles mean that a trail is left, due to the large number of ions produced along the path of the charged particle [2]. Luminescent materials, when struck by an incoming particle, absorb its energy and scintillate, i.e. re-emit the absorbed energy in the form of light [3]. Scintillators can be made from a variety of materials, depending on the intended application. This article takes a closer look at the scintillation detector.
SCINTILLATION MATERIALS
As mentioned above, a scintillator is a material which exhibits scintillation: the property of luminescence when the scintillator substance is excited by ionizing radiation [4]. Scintillators can be organic (crystals, plastics or liquids), inorganic (crystals or glasses) or gaseous. Inorganic scintillators are usually crystals grown in furnaces at high temperature, for example alkali metal halides, often with a small amount of activator impurity; the most widely used inorganic crystal is NaI(Tl) (sodium iodide doped with thallium). Some organic scintillators are pure crystals. The most common types are anthracene (C14H10), stilbene (C14H12) and naphthalene (C10H8). Anthracene has the highest light output among all organic scintillators and is therefore chosen as a reference: the light output of other scintillators is sometimes expressed as a percentage of the anthracene light. Plastic and liquid scintillators are solutions of organic fluorescent substances in a transparent solvent. The most widely used solvents are toluene, xylene, benzene, phenylcyclohexane, triethylbenzene and decalin; the most widely used
plastic solvents are polyvinyltoluene and polystyrene. Gaseous scintillators consist of nitrogen and the noble gases (helium, argon, krypton and xenon), with helium and xenon receiving the most attention. Here the scintillation process is due to the de-excitation of single atoms excited by the passage of an incoming particle; this de-excitation is very rapid, so the detector response is quite fast. The most commonly used glass scintillators are cerium-activated lithium or boron silicates. Lithium is more widely used than boron since it releases more energy on capturing a neutron and therefore gives a greater light output [4].
Scintillators are characterized by their light output (number of emitted photons per unit of absorbed energy), short fluorescence decay time, and optical transparency at the wavelengths of their own emission. The latter two characteristics set them apart from ordinary phosphors. The shorter the decay time of a scintillator, that is, the shorter its flashes of fluorescence and hence the smaller the so-called "dead time" of the detector, the more ionizing events per unit time it is able to detect. By choosing the optimal combination of scintillator properties, different types of detectors are created.
SCINTILLATION DETECTORS
The first device which used a scintillator was built in 1903 by Sir William Crookes, who used a ZnS screen. The scintillations produced by the screen were visible to the naked eye when viewed through a microscope in a darkened room; the device was known as a spinthariscope. Sensors of this kind work by converting the energy of the fluorescent bursts produced by ionizing particles passing through the scintillator into electrical energy. A scintillation detector, or scintillation counter, is obtained when a scintillator is coupled with an electronic light sensor such as a photomultiplier tube (PMT) or a photodiode. The PMT absorbs the light emitted by the scintillator and re-emits it in the form of electrons via the photoelectric effect. The general form of the scintillation detector is shown in Figure 1.
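The decay-time remark above can be quantified with a standard paralyzable dead-time estimate; the relation and the decay times below are general counting-statistics material used for illustration, not figures from this paper:

```python
import math

def observed_rate(true_rate_hz, dead_time_s):
    """Paralyzable dead-time model: the recorded rate is n * exp(-n * tau),
    so a shorter scintillation decay time (shorter effective dead time)
    lets the detector resolve more events per second."""
    return true_rate_hz * math.exp(-true_rate_hz * dead_time_s)

true_rate = 1.0e6                        # incoming events per second, illustrative
for name, tau in [("fast plastic scintillator, ~5 ns", 5e-9),
                  ("NaI(Tl) crystal, ~230 ns", 230e-9)]:
    print(f"{name}: recorded ~ {observed_rate(true_rate, tau):.3e} events/s")
```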

Thallium-doped sodium iodide, NaI(Tl), is the most widely used scintillation material. It is used in many detectors, but its use is not without problems: its main drawback is its hygroscopic nature, so such detectors must be hermetically sealed, which greatly complicates their manufacture and sometimes their use. It is also difficult to grow crystals of large size, so the sensors are correspondingly small. Liquid scintillators, by contrast, can be of any size, but large volumes of the chemicals are unsafe and difficult to transport. One of the newest developments is the glass scintillator, which is sensitive to neutron radiation. The advantages of scintillating glass fiber are given below:
- the only commercial alternative to pressurized gas tubes;
- large-area detectors with a larger effective neutron cross-section, giving high detector sensitivity;
- solid-state, flexible, more robust and safer than ³He and ¹⁰BF₃ gas tubes;
- neutron/gamma discrimination better than 8500:1;
- low microphonics, allowing operation during transportation; no shipping hazard as carry-on or checked luggage on commercial airlines [5].
As seen from the above characteristics, detectors based on scintillating glass win on many fronts. Very compact detectors can be created and used in mobile applications, and public places can be equipped with detectors of this type since they pose no risk to human health. The application of these detectors is not limited to the areas mentioned above.
CONCLUSION
Several dozen useful types of scintillators have been developed over the past fifty years, involving a variety of scintillation mechanisms. Further development of radiation-detector technologies is needed to improve manufacturing processes in many areas. In particular, such a progressive method as scintillating glass fiber will help people in many situations, e.g. to protect their lives in a variety of activities, including NDT. In any case, it is easier to prevent an accident, and the use of modern means of detecting ionizing radiation can help.

Figure 1. General form of the scintillation detector.

REFERENCES
1. James R. Fromm, "The Detection of Ionizing Radiation", 1997.
2. Das Gupta, N.N.; Ghosh, S.K., "A Report on the Wilson Cloud Chamber and its Applications in Physics", 1946.
3. Lambert M. Surhone, Mariam T. Tennoe, Susan F. Henssonow, "Scintillator", 2010.


4. Leo, W. R. “Techniques for Nuclear and Particle Physics Experiments”, 1994.

5. R. Seymour, C. D. Hull, T. Crawford, M. Bliss. “IAEA International Conference on Security of Material. Stockholm, 7-11 May 2001”

SYSTEM OF GAS FLOW CONTROL AND REGULATION Nazarova K. O. The scientific adviser Gurin, L. B., Ph. D., associate professor, Linguistic adviser Kozlova N.I., senior instructor Tomsk Polytechnic University, 634050, Russia, Tomsk, Lenina, 30 E-mail: [email protected]

Gas flow measurement in industrial installations is necessary for commercial accounting with the gas supply organization, as well as for internal control, determination of specific gas consumption and reporting. Today the most common method of metering large gas flows is the differential pressure method, which is implemented in the information-measuring systems of many companies. The method of calculating the flow rate and determining the uncertainty (error) of the flow measurement is normalized by the standard GOST 8.586.1,2,3,4,5-2005 "Measurement of flow rate and quantity of liquids and gases by means of orifice devices". The differential pressure method is based on the fact that, in a measuring pipe with an orifice, the local narrowing of the flow converts part of the potential energy into kinetic energy: the average flow velocity at the restriction increases, and the static pressure there becomes lower than the static pressure upstream of the orifice. As the flow rate increases, the pressure drop increases as well, and it can therefore serve as a measure of the substance flow. In measuring flow by the variable pressure drop, orifice plates (diaphragms) are widely used because of their simple design and ease of assembly and disassembly. But these flowmeters have some disadvantages. The flow measurement accuracy depends on the discharge coefficient, which is the ratio of the actual flow rate to the theoretical value; it changes during operation and increases the flow measurement error. Factors affecting the change of the discharge coefficient are changes of the geometric dimensions of the diaphragm, which may be caused by hydraulic shock in the pipeline, the inevitable blunting of the sharp entrance edge of the aperture, the surface roughness of the measuring pipe, the distance between local resistances in the measuring pipe, etc. The accuracy of flow measurement with an orifice also depends on the quality of its installation and on the availability of straight pipe
sections of the calculated diameter without additional sources of disturbance (burrs, welds, bends, tees, valves and fittings). The proposed system for measuring the gas flow rate is based on the application of test methods for improving measurement accuracy and of the theory of invariance in measurement technology [5]. The test method for increasing the accuracy of the gas flow measuring system is implemented as shown in Figure 1.

Figure 1 – Structure of the information-measuring system for gas flow measurement
The invariant information-measuring system for gas flow measurement consists of a measuring pipeline in which a constriction device OP is installed. The pressure drop across the constriction device is measured by the differential pressure sensor S. The measurement process consists of two cycles. In the first cycle the flow rate is measured by the pressure drop ∆p1 with the valve K closed, so the entire gas flow Q passes through the constriction device. In the second cycle the valve is opened and part of the gas, q, passes through the standard (reference) meter SOP; the output ∆p2 of the differential pressure transducer is then formed in proportion to the difference Q - q. The valve is then closed, and the measured values of q, ∆p1 and ∆p2 are stored in the programmable controller, which calculates the flow rate using the following formula:

Q = q·√∆p1 / (√∆p1 − √∆p2).     (1)

(Relation (1) follows from writing the flow equation for both cycles, Q = C·√∆p1 and Q − q = C·√∆p2 with the same coefficient C, and eliminating C.)

Equation (1) is invariant with respect to the discharge coefficient, which makes it possible to ignore disturbing effects such as blunting of the input edge, roughness of the inner surface of the measuring pipe, or distortion of the flow profile. The reference meter is connected only periodically, and in the interval between its connections the discharge coefficient is constant. Opening and closing of the valve is controlled by the programmable controller; the valve opens automatically after a certain period of time. The processed information can be displayed in the form of daily reports and printed at the operator's workstation. The distinctive features of this system are its inexpensive components and the absence of manual data entry, which eliminates the possibility of personal errors of the operator.
The design of the main unit of the system, the flowmeter, can be implemented using several variants of the calculation of its parameters:
- calculation of the flowmeter parameters for a given upper pressure limit of the differential pressure transducer and the characteristics of the medium;
- calculation of the flowmeter parameters for a given maximum pressure loss in the constriction device and the characteristics of the medium;
- calculation of the flowmeter parameters for a given flow rate and pipeline, providing the minimum uncertainty of the measured flow rate and quantity of substance;
- calculation of the flow rate from the known parameters of the orifice, together with verification of compliance with the requirements of GOST 8.586.1-5-2005 with respect to the straight pipeline sections and the flow measurement as a whole (the so-called inverse calculation of the flow).
The main design relationship between the pressure drop ∆P, Pa, across the constriction device and the flow rate Q is determined by the flow equation:

Q = α·ε·F0·√(2·∆P/ρ),     (2)

where ρ is the density of the medium before the constriction device, kg/m³; F0 = π·d²/4 is the area of the orifice flow section, m², with d the diameter of the aperture; α is the coefficient of discharge, formula (3.5) of GOST 8.586.1-2005; ε is the correction (compressibility) factor taking into account the expansion of the medium as its pressure drops during the flow through the constriction device (for an incompressible medium ε = 1). Knowing that F0 = π·β²·D²/4, where β = d/D is the relative diameter of the diaphragm (B.3, GOST 8.586.1-2005), and using formulas (5.4) and (5.6) of GOST 8.586.1-2005 together with
(2), the dependence of the measuring pipe diameter Dp, mm, on the range of flow rates Q, m³/h, shown in Figure 2 is obtained.
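A minimal numerical sketch of equations (1) and (2), with purely illustrative values for the orifice geometry, gas density and discharge coefficient, shows how the two-cycle calculation recovers Q without knowing the actual (drifted) discharge coefficient:

```python
import math

def orifice_flow(dp_pa, alpha, eps, d_m, rho):
    """Equation (2): volume flow rate Q = alpha * eps * F0 * sqrt(2 * dP / rho)."""
    f0 = math.pi * d_m**2 / 4.0
    return alpha * eps * f0 * math.sqrt(2.0 * dp_pa / rho)

def dp_for_flow(q_m3s, alpha, eps, d_m, rho):
    """Inverse of equation (2): pressure drop produced by a given flow rate."""
    f0 = math.pi * d_m**2 / 4.0
    return rho / 2.0 * (q_m3s / (alpha * eps * f0)) ** 2

# Illustrative values (not from the paper): true flow, reference-meter flow,
# orifice diameter, gas density, and a discharge coefficient that has drifted.
Q_true, q_ref = 0.50, 0.05          # m^3/s in the pipe; m^3/s through SOP
alpha_real, eps, d, rho = 0.58, 0.97, 0.10, 5.0

dp1 = dp_for_flow(Q_true, alpha_real, eps, d, rho)           # cycle 1, valve closed
dp2 = dp_for_flow(Q_true - q_ref, alpha_real, eps, d, rho)   # cycle 2, valve open

Q_single = orifice_flow(dp1, 0.61, eps, d, rho)   # single-cycle result with an assumed,
                                                  # outdated alpha = 0.61 -> biased
Q_test = q_ref * math.sqrt(dp1) / (math.sqrt(dp1) - math.sqrt(dp2))  # equation (1)
print(f"single-cycle Q = {Q_single:.4f}, two-cycle Q = {Q_test:.4f}, true Q = {Q_true}")
```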

Figure 2 – Dependence of the measuring pipeline diameter on the range of measured flow rates
For a long time, research to identify and eliminate the causes of errors in differential-pressure flow measurement has been carried out by Daev J.A., who suggested applying methods of the theory of invariance in measurement technology to increase the flow measurement accuracy [5]. In general, the increase in flow measurement accuracy is achieved by selecting the optimal balance between the range of the measured flow rate, the pipe diameter and the orifice, as well as by reducing edge blunting through the use of wear-resistant materials in the machining of the edges or by introducing such materials into the orifice design.
References:
1. GOST 8.586.1-2005. Measurement of flow rate and quantity of liquids and gases by means of orifice devices. Principle of the measurement method and general requirements. – Introduced 01.01.2007. – Moscow: Publishing House of Standards, 2007. – 57 p.
2. GOST 8.586.2-2005. Measurement of flow rate and quantity of liquids and gases by means of orifice devices. Orifice plates. Technical requirements. – Introduced 01.01.2007. – Moscow: Publishing House of Standards, 2007. – 57 p.
3. GOST 8.586.5-2005. Measurement of flow rate and quantity of liquids and gases by means of orifice devices. Measurement procedure. – Introduced 01.01.2007. – Moscow: Publishing House of Standards, 2007. – 57 p.
4. Daev J.A. On the sharpness of the entrance edge of the aperture in gas flow measurement // Devices and Systems. Command, Control, Diagnostics. 2009. No. 12. P. 29-30.
5. Daev J.A. Flowmeter system eliminating the influence of the end // Electronic journal "Oil and Gas Business", 2009. – Mode of access: http://www.ogbus.ru/authors/Latyshev/Latyshev_2.pdf/


CONSTRUCTIONS OF THE PRECISION GEARS WITH AN ELASTIC LOAD OF INCREASED DURABILITY Staheev E. V. Research supervisor: Yangulov V. S., DSc, T.S. Mylnikova, senior teacher. Tomsk Polytechnic University 30, Lenin Avenue, Tomsk, 634050, Russia, E-mail: [email protected] Application of tooth gears as part of spacecraft reducers is a highly specific field. The main problem of these gears is loss of serviceability over a long operating life (20 years and more), where loss of serviceability means the inability to ensure the specified accuracy of motion of the output shaft. The lost motion of the drive, which in its turn depends on the clearance in the gear meshing, is considered to be the main factor influencing the functionality of the gear. Eliminating this clearance for a long period of time is very hard to implement, since spacecraft (shuttles, for example) cannot be maintained in service and no replacement of components is possible. That is why a method for producing reducers for spacecraft has been suggested. Wave gears with intermediate bodies (WGIB) improve the reliability and durability of gears by increasing the hardness of the working surfaces (more than 60 Rockwell units) and by decreasing the stress in the deformed element. The main advantage of gear constructions with intermediate bodies is that they provide a permanent elastic load over a very long period of time. The experience of application of these types of gears has shown that they can be used with a guaranteed service life of up to 20 years, while the error of the output shaft does not exceed 2 seconds of arc. Consider a few examples of the constructions of such gears.

Figure 1 shows a general view of the wave gear with an intermediate body in the form of a serpentine spring. In this gear the inner ring of the flexible bearing of the generator, along with its primary function, acts as the elastic element that creates an elastic load in the meshing of the coils of the intermediate body with the teeth of the rigid gear. For this purpose two stud-pins 2, arranged diametrically opposite each other and installed in the slots 3 connected to the input shaft of the gear, are attached to the inner ring 1. Tightening the stud-pins 2 towards each other with the screw nuts 4 deforms the ring 1, making it oval. Meshing of the coils of the intermediate body with the teeth of the rigid gear occurs along the long axis of the oval. By regulating the deformation of the ring 1 (screwing/unscrewing the nuts) we change the size of the mesh arc with the central angle 2Ө. When the central angle of the mesh region is 2Ө ≥ 20˚, the elastic load between the intermediate bodies and the teeth of the rigid gear compensates for the wear of the working surfaces.

Fig. 2. Wave gear with intermediate rolling bodies

Fig. 1. Wave gear with intermediate bodies and an elastic inner ring of the flexible bearing


Figure 2 shows the wave gear with intermediate rolling bodies, where the outer ring of the bearing 1 and that of the generator 2 are made of several elements: a race 3, with radial grooves in which the elastic element 4 is installed and the outer split ring 5. The adjustment of the force of the elastic elements 4 is performed by the fitting of linings 6.

The balls 7 are placed between the elastic elements 4 and the ring 5. In the free state the diameter of the ring 5 is equal to the calculated diameter of the generator. If there is clearance in the mesh, the elastic elements 4 enlarge the ring 5 and press the intermediate bodies against the teeth of the rigid gear 8. The clearance a is very small compared to the sizes of the intermediate bodies and therefore does not significantly affect the operation of the gear. If rollers are used as intermediate rolling bodies and the ends of the split rings have the shape shown in Fig. 2, the clearance is not expected to affect the operation of the gear at all. There are various types of such gears (some of them can be seen below in Fig. 3 and Fig. 4), and they can all work for a long time without maintenance and replacement of components. A number of companies have already implemented such gears in a variety of spacecraft, and the results of this implementation have proved to be successful. Consider another variant of the construction of the generator for creating the elastic load (Fig. 3). The generator is made as a flexible bearing whose inner ring 1 is an elastic element creating an elastic load in the meshing of the rolling bodies with the tooth profiles of the central gear. To produce the elastic load the ring 1 has two stud-pins 2 with a thread at their ends. The stud-pins are installed in the slots of the faceplate of the gear input shaft 3. When the stud-pins 2 are tightened to the shaft 3 by the screw nuts 4, the inner ring of the flexible bearing and the outer ring 6 press the intermediate bodies of the gear against the tooth profiles of the ring gear through the balls 5 (not shown in the figure). The ring 1 is deformed until the force required for the specified elastic load in the meshing is reached.

In the constructions of WGIB with intermediate rolling bodies (WGIRB) where balls are used as rolling bodies the outer surface of the generator ring is made conical. It makes it possible to change the diameter of the generator surface interacting with the balls under flexible elements force orientated towards the cone top. In the gear (Fig. 4) the flexible elements 1 are placed in the axis slot of the outer ring 2 of the generator bearing and are based on the axial bearing 3, installed in the gear body 4.

Fig. 4. WGIB with flexible elements in the inner ring of the generator bearing

References:
1. Yangulov V.S. Gears of increased accuracy and durability: monograph. – Tomsk: Tomsk Polytechnic University Publishing House, 2008. – 137 p.
2. Yangulov V.S. Wave gears with intermediate bodies: monograph. – Tomsk: Tomsk Polytechnic University Publishing House.
3. Turpaev A.I. Screw mechanisms and transmissions. – Moscow: Mashinostroenie, 1982. – 224 p.
4. Yangulov V.S. Wave and screw mechanisms and transmissions: textbook. – Tomsk: TPU Publishing House, 2008. – 190 p.

Fig.3. Generator with elastic deformed inner ring of the bearing


MASTER-OSCILLATOR POWER-AMPLIFIER SYSTEM CONTROLLER Sukharnikov K.V., Gubarev F.A. Linguistic advisor: Nakonechnaya M.E. Tomsk Polytechnic University, 30 Lenin Avenue, Tomsk, Russia, 634050, e-mail: [email protected] V.E. Zuev Institute of Atmospheric Optics SB RAS, 1 Academician Zuev square, Tomsk, Russia, 634021 size capacitive discharge pumped CuBr laser with thyratron based excitation circuit. The power amplifier’s gas discharge tube length is 90 cm, tube diameter is 4 cm, maximum average lasing power in oscillator mode is 2.6 W (pumping power is 1.5 kW). Lasers are equipped with external triggering circuits including fibre optic receivers (the versatile fibre optic connection, HFBR-0501 series). The MOPA controller is connected with the lasers by optic fibre to avoid electromagnetic interference that is emitted by high-voltage power supplies. power amplifier 3


Copper and copper compound vapour lasers (CVLs) are commonly used sources of high power visible light. They have two output wavelengths at 510.6 nm (green) and 578.2 nm (yellow). CVLs are pulsed lasers operating at kilohertz pulse repetition frequencies. The pulse width is typically a few tens of nanoseconds. The average power of these lasers can range from units to more than thousands of watts of lasing power [1–3]. Low- and medium-power high-quality output beams of CVLs can be obtained by using singlelaser configuration. But there is a limit of highquality beam energy obtained in this case. The best solution of this problem at high power is the use of master-oscillator power-amplifier (MOPA) systems [1]. MOPA refers to a configuration consisting of a master laser and an optical amplifier to boost the output power. It is in principle a more complex system than a laser which directly produces the required output power, but it also has some advantages. With a MOPA system instead of a laser, it can be easier to reach the required performance such as wavelength tuning range, beam quality or pulse duration if the required power is very high. High efficiency of this system requires accurate matching and precision synchronization of a master-oscillator and power-amplifier. Hence, the main aim of the work is to design a timing device with precision delay adjustment, high frequency stability, good noise immunity and low supply power. We need that sort of a device to study CuBr laser with capacitive discharge excitation [4] in a power-amplifier mode. The MOPA system that the controller is made for is shown in Fig.1. The master-oscillator is a small-sized semiconductor-pumped CuBr laser with ~200 mW average output power and 5 mm beam diameter. The power-amplifier is a middle


Fig.1. MOPA system. 1, 2 – plane-plane resonator; 3 – deflecting mirror; 4 – average power meter

A block diagram of the designed device is shown in Fig.2. The frequency of the controller's internal oscillator can be set in the range 5 – 70 kHz. The controller also has an external triggering input to control the MOPA system from a computer or another controller. The delay can be adjusted precisely from 0 to 100 ns. The pulse width can also be adjusted.

Fig. 2 (block diagram labels): basic frequency generator; two identical channels, each consisting of a delay circuit, pulse shaper, fibre optic transmitter driver, fibre optic transmitter and fibre optic link, feeding the MO and PA triggering circuits respectively.


Fig.2. MOPA controller's block diagram. MO – master oscillator, PA – power amplifier

Precision timing in the basic frequency generator, delay circuits and pulse shapers (Fig.2) is provided by CMOS timers ICM7555. They have improved performance in comparison with standard 555 timers, such as extremely low supply current and high-speed operation. We use a domestic power pack Robiton SN500S. The stability of the supply voltage is provided by the precision voltage regulator L7812. High noise immunity is achieved owing to the application of LC- or C-type low-pass filters, the absence of galvanic coupling with high-current circuits and a protective metal shield. Initially the device is adjusted manually, but the system may be upgraded to full computer control owing to its modular build and external triggering input. Fig.3 shows the current pulses of the master-oscillator and power-amplifier when the average output lasing power is maximal. The current pulses were registered with calibrated Rogowski coil probes and a LeCroy WJ-324 digital oscilloscope. The output power was measured by an Ophir 30C-SH power meter.
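To give a feel for how a 7555-based frequency generator can be dimensioned, the sketch below evaluates the textbook astable relation for a 555/7555 timer, f = 1.44 / ((R1 + 2·R2)·C). The component values are illustrative assumptions and are not the actual values used in the controller described here:

```python
def astable_frequency(r1_ohm, r2_ohm, c_farad):
    # Classic 555/7555 astable approximation: f = 1.44 / ((R1 + 2*R2) * C)
    return 1.44 / ((r1_ohm + 2.0 * r2_ohm) * c_farad)

# Hypothetical component values chosen so the output spans roughly 5-70 kHz,
# the tuning range quoted for the controller's internal oscillator.
for r2_ohm in (500.0, 5e3, 14e3):        # e.g. positions of a trimmer resistor
    f = astable_frequency(r1_ohm=1e3, r2_ohm=r2_ohm, c_farad=10e-9)
    print(f"R2 = {r2_ohm/1e3:4.1f} kOhm -> f = {f/1e3:5.1f} kHz")
```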

Fig.5 represents the dependences of the output power (POUT) on the pumping power (PIN) with the HBr additive. The output in amplifier mode is about 30–50 % higher than in oscillator mode (at PIN > 1 kW), which is typical for metal vapour lasers as a whole.

Fig.3. Current diagrams of master-oscillator (1) and power amplifier (2). 1 – 10 A/div. 2 – 20 A/div.

Fig.5. PIN vs. POUT with HBr-additive. 1 – amplifier mode; 2 – oscillator mode; 3 – background radiation

The characteristics of the CuBr laser with capacitive discharge excitation in power-amplifier and oscillator mode were studied with the help of this device. The output power (POUT) curves versus pumping power (PIN) are shown in Figs. 4 and 5. In the first case (Fig.4) the amplifier curves were obtained without adding HBr to the active medium. As one can see, the output power in oscillator mode is greater than in amplifier mode at high pumping powers. This effect can be explained by not complete removing of inversion in the active volume because of narrow input beam. It is known that the addition of a small quantity of HBr to the active medium of CuBr laser leads to improving its output characteristics [3]. Therefore the information about the HBr influence on output in various modes of operation is urgent.

Fig.4. PIN vs. POUT without HBr-additive. 1 – amplifier mode; 2 – oscillator mode; 3 – background radiation

Thus, CuBr lasers with capacitive discharge pumping are suitable for use in high-power MOPA systems with their peculiar characteristics. A common property of the presented curves (Figs. 4 and 5) is a high level of background radiation which reaches 30 % of the maximum output both with and without HBr. It distinctly differentiates them from CuBr power amplifiers with traditional pumping (background ~ 10 %). REFERENCES 1. Little C.E. Metal Vapour Lasers: Physics, Engineering and Applications. – Chichester (UK): John Willey & Sons Ltd., 1998. – 620 p. 2. Marshall G. Kinetically Enhanced Copper Vapour Lasers: D. Phill. Thesis. – Oxford, 2003. – 187 p.

3. Evtushenko G.S., Shiyanov D.V., Gubarev F.A. High frequency metal vapour lasers. – Tomsk: Tomsk Polytechnic University Publishing House, 2010. – 276 p.
4. Gubarev F.A., Sukhanov V.B., Evtushenko G.S., Fedorov V.F., Shiyanov D.V. CuBr Laser Excited by a Capacitively Coupled Longitudinal Discharge // IEEE J. Quantum Electronics. – 2009. – Vol. 45. – No 2. – P. 171–177.

MAGNETOMETERS TO DETERMINE THE VECTOR OF THE EARTH MAGNETIC FIELD A.N. Zhuikova Scientific adviser: A.N. Gormakov, Ph.D., docent, T.S. Mylnikova, senior teacher Tomsk Polytechnic University, 30, Lenin Avenue, Tomsk, 634050, Russia E-mail: [email protected] Magnetoelectronic devices for determining the direction of the magnetic induction are widely used in various fields of science and technology. However, the application of these devices was the most common in the design of the instruments to record the Earth's magnetic field (EMF) and orientation of various types of equipment on the plane and in the space relative to the direction of EMF. The properties of EMF when used in navigation and navigation-piloting systems allow determining the course and spatial orientation of the object. EMF (or geomagnetic field) at every point in space is characterized by the vector of tension T, which direction is determined by three components X, Y, Z (north, east and vertical) in a rectangular coordinate system (Fig. 1) or the three elements of the EMF: the horizontal component of the intensity H, magnetic declination D (the angle between H and the plane of the geographic meridian) and magnetic inclination I (the angle between T and the plane of the horizon).

Fig. 1. The components of the Earth magnetic field

The Earth magnetism is due to the action of permanent sources located inside the Earth and experiencing slow secular variations, and of external varying sources in the Earth magnetosphere and ionosphere.
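The relations between the rectangular components and the elements of the EMF follow directly from the geometry in Fig. 1: H = √(X² + Y²), T = √(H² + Z²), D = arctan(Y/X) and I = arctan(Z/H). A small sketch with assumed component values (not measured data) illustrates the conversion:

```python
from math import atan2, degrees, hypot

def emf_elements(x, y, z):
    """Convert rectangular EMF components (north X, east Y, vertical Z)
    into the horizontal component H, total intensity T,
    declination D and inclination I (in degrees)."""
    h = hypot(x, y)              # horizontal component
    t = hypot(h, z)              # full vector magnitude
    d = degrees(atan2(y, x))     # declination, angle from geographic north
    i = degrees(atan2(z, h))     # inclination, angle below the horizon
    return h, t, d, i

# Assumed example values in microtesla, typical of mid-latitudes
h, t, d, i = emf_elements(x=15.0, y=2.0, z=55.0)
print(f"H = {h:.1f} uT, T = {t:.1f} uT, D = {d:.1f} deg, I = {i:.1f} deg")
```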


The magnetic compass is considered to be a well known example of EMF application. The accuracy of determining the direction of a simple compass makes 2–5°. The accuracy of the modern marine magnetic compasses in the mid-latitudes and in the absence of roll reaches 0.3–0.5 °. It is be noted that the precise positioning of objects on the Earth surface and in the space is a complicated technical problem to be solved with the help of magnetometer systems for the control of spatial position, taking into account many parameters. The transducer of the magnetic field (TMF) is the key element of any product of micromagnitoelectronics and a wide range of commonly used technical measuring devices. TMF converts the magnetic flux into an electrical signal [1]. A magnetically sensitive element is made of the material which changes its properties when exposed to the external magnetic field. To create a magnetically sensitive element you need to use a variety of physical phenomena occurring in semiconductors and metals, their interaction with the magnetic field [2]. Selecting the The type of TMF is chosen taking into account the required parameters of the equipment under design, the conditions for its operation and a number of economic factors (Table 1). When choosing TMF special attention is to be paid to the study of their orientation characteristics [1]. An important direction of the terrestrial magnetism application is considered to be the deviation survey in drilling. Inclinometers are used for the control of the complex angular parameters of the spatial orientation of directional and horizontal wells and well equipment. The objectives of the inclination are as follows: • To avoid overlaps with other wells; • To ensure the intersection of a killer well with a blower in case of ejection; • To identify the borehole deviation and calculate the degree of the curve of borehole;

• To comply with regulatory requirements;
• To reach the geological objective of drilling;
• To determine the assets;
• To obtain data for their further application in the design of a reservoir.

Table 1. Characteristics of transducers of the magnetic field [1 – 5]

1. Hall element with high sensitivity. Minimal resolution: 1–10 µT; dynamic range: ±100 µT; power consumption: 10–50 mW. Compactness, high reliability, wide dynamic range. Satisfactory magnetic sensitivity. Fast time constant. Good orientation characteristics. Good pairing with electronics. Operating temperature range from -260 to +150 °C. High cost.
2. Thin-film magnetoresistor. Minimal resolution: 0.4–0.85 µT; dynamic range: ±(0.2–1) µT; power consumption: 30–90 mW. Compactness and high reliability. High magnetic sensitivity. Integrated technology combined with compensation and modulating coils. Fast time constant. Good orientation characteristics. Good pairing with electronics. Operational temperature range from -40 to +85 °C. Limited dynamic range. Relatively low cost.
3. Magnetic induction sensor. Minimal resolution: 0.01–0.02 µT; dynamic range: ±(1–200) µT; power consumption: 1–5 mW. Compactness and high reliability. High magnetic sensitivity. Fast time constant. Good pairing with electronics. Good orientation characteristics. Operational temperature range from -20 to +70 °C. Limited dynamic range. Low cost.
4. Ferrozond (fluxgate sensor). Minimal resolution: 0.0001–0.01 µT; dynamic range: ±0.1 µT; power consumption: 5–50 mW. Very high magnetic sensitivity. Satisfactory orientation characteristics. Large sizes. Limited dynamic range. Low mechanical strength, inability to work in conditions of vibration and shaking. Considerably large inertia. Complexity of interfacing with electronics. Operational temperature range from -10 to +70 °C. Considerable complexity and high cost.
5. Quantum (proton) magnetometer. Minimal resolution: 10⁻⁶ µT; dynamic range: 10⁻⁶–10³ µT; power consumption: 10–30 mW. Very high magnetic sensitivity. High resistance to mechanical effects (shock, vibration). Good pairing with electronics. Poor operation speed compared with the other magnetometer types. Operational temperature range from -20 to +50 °C. Considerable complexity and high cost.

The comparative analysis of the TMF characteristics (Table 1) showed that thin-film magnetoresistors have several advantages for application in well inclinometry. The only drawback at present is their low sensitivity, which can be improved in the course of further technology development.

References
1. http://www.detect-ufo.narod.ru

2. http://www.bakerhughes.ru/inteq/survey/ 3. http://davyde.nm.ru/magnit.htm 4. http://dic.academic.ru/dic.nsf/enc_physics/ 5. E.B. Aleksandrov, Atomic-resonance magnetometers with optical pumping (review). Research in the field of magnetic measurements, Ed. E.N. Chechurina. - Leningrad: Mendeleev Institute of Metrology. 1978. - Vol. 215 (275). - S. 3 - 10.


Section III: Technology, Equipment and Machine-Building Production Automation

Section III

TECHNOLOGY, EQUIPMENT AND MACHINE-BUILDING PRODUCTION AUTOMATION


ELECTRON BEAM WELDING (EBW). V.S. Bashlaev, A.S. Marin Scientific Supervisor: A.F. Knyazkov, docent. Linguistic advisor: V.N. Demchenko, senior tutor Tomsk Polytechnic University, Russia, Tomsk, Lenin str., 30, 634050 E-mail: [email protected] Dispersion: By analogy to light dispersion (separation the light of other colures), also as a dispersion are called the similar phenomena of dependence of distribution of waves of any other nature from length of a wave (or frequencies). For this reason, for example, the term the law of a dispersion applied as any wave process. Cathode: an electrode through which electric current flows out of a polarized electrical device. Anode: an electrode through which electric current flows into a polarized electrical device. Vacuum: a volume of space that is essentially empty of matter, such that its gaseous pressure is much less than atmospheric pressure. Soft vacuum, also called rough vacuum or coarse vacuum, is vacuum that can be achieved or measured with rudimentary equipment such as a vacuum cleaner and a liquid column manometer. Hard vacuum is vacuum where the MFP (mean free path of a particle is the average distance covered by a particle between successive impacts.) of residual gases is longer than the size of the chamber or of the object under test. Hard vacuum usually requires multi-stage pumping and ion gauge measurement. Torr: a unit of pressure that is equal to approximately 1.316 × 10-3 atmospheres or 133.3 pascals. At present with, it should be stated that electron beam technologies are among the most advanced means for modifying materials and media. They are widely used for research purposes. The range of their practical applications is extremely wide, from machine engineering, electronics and chemical industry to agriculture, medicine and environment protection. It should be stressed that E-beam technology is used for welding, melting, vaporizing and heat treating metals, and polymerization and cross-linking of organic materials and coatings. In this paper the process of electron beam welding will be considered. The process was developed by German physicist Karl-Heinz Steigerwald, who working at various electron beam applications, perceived and developed the first practical electron beam welding machine which began operation in 1958. One must take into account that electron-beam welding (EBW) is a fusion welding process in which the workpiece is bombarded with a dense stream of high-velocity electrons. Electrons are elementary atomic particles characterized by a negative charge and extremely small mass. It is that the


energy of these electrons which is converted to heat upon impact. Acceleration electrons to a high energy state to 30-70 percent light speed provides energy to heat the weld. Heating is so intense that the beam almost instantaneously vaporizes a hole through the joint. This process occurs under temperatures about 25 000 °C. Extremely narrow deep-penetration welds can be produced using very high voltages—up to 150 kilovolts. Deep penetration of heat allows welding of much thicker preparations, than probably with the majority of other welding processes. However, as the electron beam is exactly focused, the total heat input is actually much lower than that of any arc welding process. As a result, the heat-affected zone is small, and the effect of welding on the surrounding material is minimal. Almost all metals can be welded by the process, but it is more often applied in stainless steels, superalloys, and reactive and refractory metals welding. Besides, the process is used to weld a variety of dissimilar metals combinations. Welding of automobile spare parts, space equipment, jewelry and semiconductor is made by this method. In the picture 1 a structure of the electron beam gun is shown. The gun receives electric energy from a high-voltage source of direct current. The electron beam gun used in EBW produces electrons as well as accelerates them, using a hot cathode emitter made of tungsten that emits electrons when heated. The cathode and the anode provide such structure of the electric field between them which focuses electrons in a bunch with diameter equal to the diameter of an aperture in the anode. They pass through the anode at high speed and they are then directed to the workpiece with magnetic forces. Electrons have identical charge and make a start one after another. Therefore a beam leaing the anode focuses a magnetic field in a focusing coil to prevent the increase of a bunch diameter and reduce of energy density. The dense bunch with high speed strikes in the small sharply limited platform on the product and heats up metal to high temperature. Electrons transform metal beneath the beam from molten state to gas, allowing the beam to travel deeper and deeper. As the beam penetrates the material, the small gas hole produced closes rapidly, and the surrounding molten metal fuses, causing minimal distortion and heat effect outside the weld

Section III: Technology, Equipment and Machine-Building Production Automation zone. The device is placed in the vacuum chamber. [1] Picture1 – Electron Beam Gun It must be noted that the quantity of heat and penetration depth depend on several variables. The cores from them are number and speed of electrons, diameter of an electronic beam and its speed. Greater beam current causes the increase in heat input and penetration, while higher travel speed decreases the amount of heat input and reduces penetration. Besides, the diameter of a beam can be distinguished. If the focus is located above the preparation surface, the width of welding increases, but penetration decreases. And if the focus is located below the preparation surface, the depth of welding increases, but the width decreases. Moreover, the process of electron beam welding is divided into three methods. Each of these methods is applied in certain welding environment. The first method developed requires that the welding chamber must be in hard vacuum. This method allows to weld workpieces to 15 sm thickness, and the distance between the welding weapon and the workpiece can be even 0.7 m. The second method gives a chance to perform EBW in soft vacuum, under pressure of 0.1 torr. This allows to use larger welding chambers and reduce the time and equipment required to attain evacuate the chamber. But this reduces the maximum stand-off distance by half and decreases the maximum material thickness to 5 sm. The last method is called nonvacuum or out-of-vacuum EBW, because it is performed at atmospheric pressure. The distance between workpiece and an electron beam gun is lowered to 4 sm, and the maximum thickness of the material is 5 sm. The third method is good because the size of welded workpiece has no value in the absence of the welding chamber. Advantages and disadvantages of the process will be considered further. The advantages of Electron Beam Welding, which are the following: [2] • Total energy input is approximately 1/25 of conventional welding energy • Low heat input results in minimal distortion • Close tolerances • Deep welding of workpieces with extremely limited heat-affected zones

• Repeatability of weld parameters job to job, lot to lot • High-strength weld integrity (clean, strong and consistent) • No fluxes or shielding gases to affect the properties of the weld • Penetration control to 10% welding in vac. 1x10-TORR, producing contamination-free welds • Joining of similar and dissimilar metals • Cost-effective joining meets difficult design requirements and restraint • Welding in hard reached areas with other processes • Magnified optical viewing for additional welding accuracy (20-40x typical) The EBW Limitations: [3] • High equipment cost • Work chamber size constraints • Time delay when welding in vacuum • High weld workpieces costs • X-rays produced during welding • Rapid solidification rates can cause cracking in some materials In the conclusion it would be desirable to sum up and once again to underline clear advantages of EBW. This technology is a reliable and cost effective method of joining a wide range of metals. From pacemakers used in the medical industry to sensors used on fighter aircraft, Electron Beam applications are almost limitless. The welding is performed in a vacuum, therefore welds are clean and free from oxidation. Due to the extreme density and precise control of the electrons, high weld 'depth to width' ratios can be achieved; up to 20-1 is obtainable. With minimal or no distortion, Electron Beam Welding is often the final step of a production sequence. Referents 1. The collection of proceedings of students of Russia [Electronic resource] http://www.csalternativa.ru/text/2052 2. Official site company Acceleron Inc. [Electronic resource]http://www.acceleroninc.com/ebweld/ebw eld.htm 3. Site of Welding Procedures and Techniques http://www.weldingengineer.com/ 1%20Electron%20Beam.htm


PROBLEM OF TRANSFERRING ELECTRODE METAL Bocharov A.I. Scientific Advisor: Kobzeva N.А. Tomsk Polytechnic University, 634050, Lenina av. 30, Tomsk, Russia E-mail: [email protected] Introduction Welding is a fabrication or sculptural process that joins materials, usually metals or thermoplastics, by causing coalescence. This is often done by melting the workpieces and adding a filler material to form a pool of molten material (the weld pool) that cools to become a strong joint, with pressure sometimes used in conjunction with heat, or by itself, to produce the weld. This is in contrast with soldering and brazing, which involve melting a lower-melting-point material between the workpieces to form a bond between them, without melting the workpieces. Many different energy sources can be used for welding, including a gas flame, an electric arc, a laser, an electron beam, friction, and ultrasound. While often an industrial process, welding can be done in many different environments, including open air, under water and in outer space. Regardless of location, however, welding remains dangerous, and precautions are taken to avoid burns, electric shock, eye damage, poisonous fumes, and overexposure to ultraviolet light. Until the end of the 19th century, the only welding process was forge welding, which blacksmiths had used for centuries to join iron and steel by heating and hammering them. Arc welding and oxyfuel welding were among the first processes to develop late in the century, and resistance welding followed soon after. Welding technology advanced quickly during the early 20th century as World War I and World War II drove the demand for reliable and inexpensive joining methods. Following the wars, several modern welding techniques were developed, including manual methods like shielded metal arc welding, now one of the most popular welding methods, as well as semi-automatic and automatic processes such as gas metal arc welding, submerged arc welding, flux-cored arc welding and electroslag welding. Developments continued with the invention of laser beam welding and electron beam welding in the latter half of the century. Today, the science continues to advance. Robot welding is becoming more commonplace in industrial settings, and researchers continue to develop new welding methods and gain greater understanding of weld quality and properties.[1 p. 95] Problem of welding But, unfortunately, like any process, welding is not without problems. One of these problems is splashing of electrode metal. This leads to a decrease in the quality of the weld and the loss of electrode wire. This is especially affected by arc welding. To fix this


process, we must understand how the process of transferring a drop in the weld pool. Arc welding is a type of welding that uses a welding power supply to create an electric arc between an electrode and the base material to melt the metals at the welding point. Small-droplet metal transfer can be implemented in any position. However, in practice, the use of small-drip and spray transfer is limited only by welding in the down position, because despite the fact that when welding in the vertical and overhead positions of all the drops reach the weld pool, the latter flows down due to excessive size. Due to the fact that this type of transfer requires the use of high welding current, which leads to high heat input and high weld pool, it is not acceptable for welding sheet metal. It is used for welding metals of large thickness (typically greater than 3 mm thick), especially when welding heavy steel and shipbuilding. The main characteristics of welding process with a fine-droplet transfer are: the high arc stability, lack of spatter, moderate formation of welding fumes, good wetting the edges of the seam and high penetration, smooth and uniform surface of the weld, the possibility of welding at higher modes and a high deposition rate. Because of these advantages of small-droplet metal transfer is always desirable, where its application may, however, it requires a strict selection and maintenance of welding process parameters.[2 p. 62] This problem occurs at each facility where the welded structures are used. For example at the plant "Voshod" in the Novgorod region produces weldments (farms). Ways to small-droplet metal transfer I want to offer a couple ways to fix this problem. 3.1) Raising the voltage of arc welding. This method allows for the transfer of electrode metal into the molten pool in small drops or spray. Maybe it's because when the voltage of the arc current increases electrodynamics force. When it reaches a critical value, we needed to begin the transfer process. This is a simple way which does not require excessive investment because all modern welders are designed to increase the value of current strength. But this method has two significant disadvantages: Increased consumption of electricity; Increased consumption of protective gas. These are serious drawbacks because these resources are very expensive.

Section III: Technology, Equipment and Machine-Building Production Automation 3.2) Establishment of a pulsed power generator as an option. This method involves the installation of additional equipment. The generator transports the metal electrode with short current pulses. They occur with great frequency, so the rate of filling of the joint is not only not reduced but increased. This is undoubtedly a positive feature of this method. Another positive feature is that the current value does not reach critical values. This means in turn that the welding seems to be free from defects associated c burnout of the metal. Indisputable advantage is reduced energy costs and the ability to migrate from expensive Argon to cheaper Carbon Dioxide. This method also has drawbacks including: a) The high cost of the device b) The relatively large dimensions of the device Sometimes a great price can scare even the not very greedy man.
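To illustrate the idea behind the pulsed power source, the sketch below computes the mean welding current produced by alternating a high peak pulse with a low background level; with suitable peak parameters each pulse can detach a droplet while the mean current, and hence the heat input, stays moderate. All numbers are assumed for illustration only:

```python
def mean_current(i_peak, t_peak, i_base, t_base):
    """Average current of a pulsed waveform:
    I_mean = (I_peak*t_peak + I_base*t_base) / (t_peak + t_base)."""
    return (i_peak * t_peak + i_base * t_base) / (t_peak + t_base)

# Assumed example: 350 A peak for 2 ms, 60 A background for 8 ms (100 Hz pulsing)
i_avg = mean_current(i_peak=350.0, t_peak=2e-3, i_base=60.0, t_base=8e-3)
print(f"mean current = {i_avg:.0f} A")   # well below the 350 A peak
```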

Conclusion
Let us sum up. There are two methods to choose from for solving the problem of electrode metal transfer. The first does not demand initial capital investment but is more expensive in subsequent use, while the second is more expensive at the initial stage but brings benefits later on.
Recommendation
As a future welding engineer I recommend the second variant. For me, observance of the technological process during manufacturing is very important: practice shows that up to 80 % of premature failures arise because of violations of the conditions specified by the technological process.
References
1. Lebedev V.K. Trends in power sources for arc welding. Automatic Welding, 2004.
2. Sheiko P.L. Transfer of the metal in consumable-electrode arc welding. Thesis, 1976.

THE USE OF AC FOR CONSUMABLE ELECTRODE TYPE ARC WELDING I. Kravtsov Scientific Supervisor: Assistant Professor A. Kiselyov Tomsk Polytechnic University, Russia, Tomsk, Lenin Avenue, 30, 634050 E-mail: [email protected] Abstract

The paper provides an insight into completely new welding techniques developed specifically for the dip transfer method, which until now has been notoriously difficult to work with. Based on commonly assumed hypotheses of process stability, metal transfer, weld quality and other relevant characteristics, the distinct advantages over the conventional arc welding processes are presented. The matter of application possibilities is also considered within the paper.
The first thing to be noted is that highly efficient, reliable and precise processes have always been targeted in the research work carried out in industry and at research institutes. Nowadays, these requirements dictate the appearance of innovative welding solutions that would successfully resolve long-standing problems in the welding industry. The new approach is polarity inversion with an additional integrated technique for better process control. Until recently, this approach seemed to be quite inconsistent and ineffective in many cases, particularly for consumable electrodes. More detailed studies, however, have yielded positive results [1, 2, 3]. Thus, the task of the paper is to highlight the brand new trends in welding as compared with the conventional approach.

Fig. 1. Process sequence with two positive and two negative cycles

A significant innovation is the fact that the change of polarity is carried out during the short-circuit phase.
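A convenient way to think about such a sequence is through the share of electrode-negative time, %EN. The sketch below derives %EN from an assumed cycle pattern (the counts and durations are invented for illustration; the actual controller settings are not given here):

```python
def en_share(n_ep, n_en, t_ep=1.0, t_en=1.0):
    """Share of electrode-negative polarity time, %EN,
    for n_ep positive and n_en negative cycles of durations t_ep, t_en."""
    total = n_ep * t_ep + n_en * t_en
    return 100.0 * n_en * t_en / total

# The sequence of Fig. 1: two positive and two negative cycles of equal length
print(f"%EN = {en_share(n_ep=2, n_en=2):.0f} %")   # -> 50 %

# Shifting the balance towards EN raises deposition rate at the same power
print(f"%EN = {en_share(n_ep=1, n_en=3):.0f} %")   # -> 75 %
```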


In the conventional short-circuit process, the wire advances until a short circuit occurs. At this moment the welding current rises, which allows the short circuit to open and ignite the arc again. However, two issues make short-circuit welding problematic in certain circumstances. First, a high short-circuit current produces high heat. Second, the act of opening that short circuit is uncontrolled, producing spatter. So, what do we have within the new approach? Fig. 1 shows a process sequence with two positive and two negative cycles. The positive phases (EP) mainly influence fusion penetration and the cleaning effect. The negative phases (EN), on the other hand, considerably increase the deposition rate at the same level of energy input. Consequently, given an identical average welding power, a negative wire electrode melts a considerably greater amount of wire than a positive one. The polarity changes between the two phases of the process at the start of the short circuit [4]. No arc exists at this time, as the filler metal is in contact with the weld pool. The result is obvious: at the moment the polarity changes, extremely high process stability is guaranteed [5]. The droplet size before a short circuit clearly reflects the influence of polarity on the deposition rate [6]: in the negative phase the droplet size increases significantly, leading to a respective increase of the deposition rate. The impact of the electromagnetic force on the electrode with positive polarity at the same current amplitude is significantly higher than that on the electrode with negative polarity. Electromagnetic forces assist droplet detachment and therefore prevent the deposition rate from growing. The technique allows the user to adjust the number of consecutive positive or negative current pulses or phases at will, thus making it possible to control the deposition rate. The analysis of work [1] shows a clear tendency towards reduction of penetration with the increase of %EN. This occurs because a higher %EN represents a longer negative polarity time, in which condition the heat is concentrated mostly in the electrode; consequently, a lower thermal contribution to the base metal results in less penetration. The same argument is used to explain the tendency of weld width reduction observed in the above work [1], mainly for the 50 to 70 % EN variation: the lower heat input, which occurs with the increase of %EN, makes the wetting and melting of the base metal more difficult. Hence, the molten metal tends to concentrate on the surface of the pool, increasing the reinforcement as %EN increases. Weld quality is strongly influenced by the process parameters. For this reason, special attention is paid to the quality of weld joints produced with the use of alternating current. A set of experiments has been carried out with the aim to compare both the mechanical and structural characteristics of welds


to be joined using direct and alternating current. As a positive example, welded joints of 18-10 type steel were analyzed with respect to changes in the whole range of properties [6]. The complete results of the mechanical testing of trial butt joints are summarized in Table 1. It can be seen that the tensile strength of the welded joint produced with DC is quite comparable with the AC results; the impact strength of the weld joint is even higher in the case of AC. One point to note is that the experimental procedure was carried out with AWS E308-16 electrodes, normally used for DC electrode-positive polarity.

Table 1 - Effect of welding current type on weld mechanical properties

Current type: DC. Tensile strength: 603.3...651.1 (mean 634.3). KCV, J/cm²: 96.8...124.1 (mean 107).
Current type: AC (optimized modes of arc stabilization). Tensile strength: 594.5...600.1 (mean 595.8). KCV, J/cm²: 95.8...126.5 (mean 112.4).

Metallographic studies have shown that the structure of weld metal of the two samples is almost identical and represented an austenite finegrained structure with a small amount of δ-ferrite. Indicators of micro hardness of metal welded joints produced by an alternating current are more significant. The application of AC has found its industrial implementation not even in GMAW but also, that is really essential, in SAW by means an adjustable square wave transformer for high efficiency welding. [7] The square wave technology avoids any arc blow effect caused by multiple arc currents as well as arc outs in AC zero transfer. The heavyduty technology ensures maximum lifetime in continuous operation with minimum maintenance. Based on numerous experimental results and discussions, conclusions were drawn as follows: It is possible to state that new technology meets all standard and even greater requirements specified; moreover, it argues the fact that the use of AC for consumable electrode type arc welding is possible and competitive due to its weighty advantages. The metal transfer process of the new process is very stable and the arc heating behavior is changed based on the special wave control features. By deliberately selecting the polarity of the welding current, new technique opens up new ways of joining metals even "colder". The welder can achieve the same deposition rate with a lower heat input. The extremely stable arc dramatically reduces unwanted side effects and therefore increases the reliability of the process. By reducing heat input, the process improves weld quality by reducing distortion and spatter. Improved weld quality reduces post-production rework, leading to

an increase in manufacturing efficiency. The new technology has the potential to become an independent process in its own right. The full extent of the potential in this area cannot be foreseen at present.

References
[1]. Vilarinho, L. O., Nascimento, A. S., Fernandes, D. B., and Mota, C. A. M. Methodology for Parameter Calculation of VP-GMAW // Welding Journal, USA, 2009, vol. 88, no. 4, pp. 92s–98s.
[2]. Ueyama, T., et al. AC pulsed GMAW improves sheet metal joining // Welding Journal, USA, 2005, vol. 84, no. 2, pp. 40–46.
[3]. Harwig, D. D. et al. Arc behavior and melting rate in VP-GMAW process // Welding Journal, 2006, 85(3): 52s–62s.

[4]. Pat.1292959 SU. MPK8 G01N 29/04. Short-Circuit Arc Welding Process Using A Consumable Electrode and its welding support system / Kiselyov A.S. Application Number: 3931790/25-27 Publication Date: 28/02/1987 [5]. Fronius Company. REACHING THE LIMIT OF ARC WELDING? // The Maritime Executive Magazine. URL: http://www.maritimeexecutive.com/pressrelease/latest-news-froniusinternational-gmbh/. Retrieved on Jan 27, 2011. [6]. Shatan A.F., Andrianov A.A., Sidorets V.N. and Zhernosekov A.M. — Efficiency of stabilisation of the alternating-current arc in covered-electrode welding // Avtomaticheskaya svarka, Ukraine, 2009, no. 3 p. 31-33 [7]. Egbert Schofer — A complete and reliable partner for pipe mills// Svetsaren. The ESAB Welding and Cutting journal vol..63 no.1 2008, p. 29

WEAR-RESISTANT COATING A. M. Martynenko, A. S. Ivanova, S. G.Khromova Scientific adviser: A. B. Kim, assistant professor Tomsk Polytechnic University, 634050, 30, Lenina St., Tomsk, Russia E-mail: [email protected] Currently, the market of technology improving has an important task in the hardening coatingmethod on the cutting edge.The aim of the task is to find the best method and spattering composition which satisfysuch aspects as the increase of wear resistance and thermal conductivity of instruments, the effective chipremoval, the wide range of application, low cost, etc. While discussing the wear-resistant coating, two methods are usually mentioned:the method of chemical vapor deposition (CVD)and physical vapor deposition method for coating (PVD). The latter iswidespread in Russia. The CVD-method was developed in Sweden. Here such chemical agents asTiCl4, NH3 are used. The chemical vapor deposition process is characterized by the increased speed on the sharp areas of the product surface. As the coating thickness increases, the adhesionrapidly reduces. For instrumental applications theСVD-method means that the thick and easy to break off coating layer takes the place of the cutting edge. It’s possible to avoid it by roundingthe cutting edge before coating.The minimum size of rounding is 20 microns,the typical value for modern plates is 3550 microns. Such edge preparation is desirable for plates intended for the rough and semifinishturningand milling work. However, the

cutting edge should be sharp for some instruments. The second method is PVD. The PVD (Physical Vapor Deposition) or CIB (condensation with ion bombardment) coatings are the development of Soviet and later Russian scientists. It is worth noting that this method takes the leading position in the wear-resistant coatings methods rating. The given method of coating is realized by using titanium nitride TiN, titanium carbide TiC, titanium carbonitrideTiCN, aluminum oxide Al2O3. The popularity of the new coating method was determined by the fact that PVD improves successfully the properties of the cutting tools, when the CVD technology is ineffective or useless. Firstly, PVD is realized at lower temperatures under 500O C. It allows to coatboth hard-alloy plates and tools made of high-speed steels, and even machinecomponents operating at intense friction. Secondly, the PVD coating can be applied to a sharp edge. Because of the steady rate deposition this coating does not cause the edge nose. Thus, this type of coating can be successfully used for small-sized point tools. At the same time, a thin PVD coating layer cannot compete with more strong CVD coatings.Their total layer thickness can run up to 22-25 microns, that’s why CVD coatings are also widespread in spite of its high price. However, science is still in progress. During the last decade various coatingcombinations with thin


XVII Modern Technique and Technologies 2011 external solid lubricating coatings (for example TiAlN and MoS2) were developed and widely used. Such coatings provide an effective chip removalandgoodtool bedding. The developments of various amorphous carbon coatings are dynamically carried out. Diamond-like coatings (DLC) havea low friction factor and high wear resistance. The obtained carbon nanofilms have the properties similar to ones of a diamond. Such coatings have very high abrasive wear resistance which outperformsother types of coatings by 50 times. Unfortunately, their temperature stability and oxidation resistance are limitedto 300O C which is not enough for most metalworking with the exception of aluminum and silumin cutting. But, due to its abrasion resistance, DLC show good results at the cutting of various composite materials based on glass- and carbonfilled plastics which are widely used in engineering. To determine the efficiency of cutting tools with a wear-resistant coating, it is necessary to define the mechanisms of a runoutappropriate to a particular treatment process.The runout of the cutting tool working surfaces depends on the physical, mechanical and chemical properties of the coating and the work metal. One can point out three main mechanisms of tool degradation which take place directly in the contact zone with the work surface. • The abrasive wear of a side face by solid impurities influencing the tool surface • The diffusion wear which is defined byinterdiffusion processes of the tool and work material. It takes place with a carbide solution with the following direct diffusion solution of dissociation elements in the work material. At higher temperature the tool material “dissolves” in the chip and is “removed”. • So-called adhesion-fatigue wear which is determined by the work material type and the friction factor in the contact zone. The repeated cycle occurrence and adhesion bond openingload the front part of the tool which leads to crack formations in the tool edges. The cutting tool is exposed to all above-listed types of wear. Its application as the obstacleof the diffusion and adhesion wear of chemically inert highly consistent carbide-, nitrideand carbonitride-based coating can improve multiplythe wear resistance and life time. The cutting speed, load distribution on contact surfaces and cutting emulsion define the cutting temperature, contact stress, chemical reactions in the cutting zone and the existence of diffusion processes between the tool and work metal. The reason of the tool breakdown is the temperature (the cutting speed). High-speed metalworking leads to thereductionofheatsink in the tool and the increase of chip heating. Typically, the temperature of the work metal (including fine shavings) and the tool increases with increasing of the cutting speed. However, with the sufficiently


high treatment speed (defined for each material, tool and work metal) the temperature of the cutting edges remains almost unchanged.Up to 70 per cent of the heat generated in the contact zone is removed with chips and the heat transfer in a metal workpiece and tool is minimal. Protective coatings can significantly reduce the heat and provide high-speed processing at relatively low temperatures. The use of high solid coatings which ensure the temperature reduction in the cutting area by reducing the friction factor and good heatsink can lower the temperature of cutting. TiAlN (50/50TiAlN, 30/70 TiAlN, etc.) coatings are widely used. In many cases they provide a significant increase in service life and process rate without using cutting emulsion.The cutting emulsion results in high-amplitudetemperature fluctuationwhich has a bad influence on the mechanical properties of the tool.The advantage of these coatings consists in that they maintainhigh hardness at higher temperatures. Moreover,they have thelow (compared to the coating of titanium nitride) frictionfactor, and alsooxidation resistance at higher temperatures (up to 700°C) and relatively high thermal conductivity. It provides better heat sink and prevention of the coatingdescalingin a continuous cuttingmode.

Fig. 1. Hobs and shaper cutters

Certain requirements are also imposed on the cutting tool itself. The cutting tool must be strong, resilient and heat-resistant, and must have a high cutting edge hardness, hardness of the tool material, high adhesion of the coating and high abrasive wear resistance. It is worth applying wear-resistant, chemically inert coatings to high-speed steel and, in particular, to very hard high-strength sintered tungsten-carbide and titanium-tungsten-carbide (TC) tools. According to the experiments, the TiN coating wears out faster than the TiC coating when machining cast iron, but it is more stable at higher speeds when machining carbon steel and other materials. At high loads on the cutting edge, nanostructured coatings provide great

advantages in the manufacture of cutting tools. Superdispersed materials with an increased area of grain boundaries have a more balanced relationship between hardness, which positively influences durability, and the strength characteristics of the material. In nanomaterials, crack propagation and branching are obstructed due to the hardening of the grain boundaries. The creation of coatings for cutting tools of the new generation is most effective when using the innovative concept of a nanometric structure with alternating layers of nanometer thickness of different composition, structure and functionality.

References
1. Maksimov M. Wear-resistant coatings as a driver of the innovation process in tool-material technology and modern metalworking. 20.04.2010. http://popnano.ru/analit/index.php?task=view&id=1150&limitstart=0
2. Titov V., Cand. Tech. Sc. Coatings for cutting tools. Scientific and Technical Centre of the GlobusStal company. http://www.rmo.ru/ru/nmoborudovanie/nmoborudovanie/2004-/26_29_nmo_1_04.pdf
3. Coat, please. Susan Woods, associate editor. October, 2004. http://www.ctemag.com/pdf/2004/0410CoatPlease.pdf

SOFTWARE FOR MATHEMATICAL MODELING OF WELDING PROCESS. Mishin M.A Scientific Supervisor: Krektuleva R.A., docent. Linguistic advisor: Demchenko V.N. Tomsk Polytechnic University, Russia, Tomsk, Lenin str., 30, 634050 E-mail: [email protected] Abstract This article is about new software that can be used in welding industry. It describes software products, their main properties and features. The article will be of interest to students and professionals involved in welding processes modeling. Key words – welding process, mathematical models, software, physical process, thermal problem. Introduction Currently, computer technology allows to make a great number of calculations almost instantly. The technique has become vital in mathematical modeling of processes, mechanisms, events, etc. These models are widely applied in the production the process, as they help identify and describe properties with sufficient accuracy, control the process, quality, etc. The mathematical model describing the welding process is one of them. There are many platforms which simulate processes of heat transfer, mass transfer, X-rays and other radiations, in general, all physical processes. Welding includes all these processes, therefore it requires high-quality software to show all known physical processes a pictorial form. This article reviews present day programs of mathematical modeling of welding processes.

Describtion of programs for welding modeling Today Ansys, MSC.Sinda, MEZA, MSC.Marc, MatLab are the most popular programs. ANSYS is a finite element analysis software package, solving problems in various fields of engineering (strength of structures, thermodynamics, fluid dynamics, electromagnetism), including interdisciplinary analysis. ANSYS is a versatile finite element software package (the developer is the company ANSYS Inc.), which allows to solve a wide range of tasks of a single user environment (and, what is more, on the same finite element model) in the areas of: • Strength; • Heat; • Fluid dynamics; • Electromagnetism; Interdisciplinary bound analysis combines all four types; design optimization is based on all the types of analysis mentioned above.[1] MSC Sinda is a general-purpose software package for solving thermal analysis of structures, analysis of radiation level influencing the design, simulation and evaluation of thermal stresses arising in the product during the operation, etc. The complex MSC Sinda is the industry standard in the field of complex thermal


XVII Modern Technique and Technologies 2011 calculations using finite difference method and the construction of thermal RC-network as well as thermal substructures (super-elements) for radiation analysis.[2] The field of MSC Sinda application is very extensive. Due to its capabilities software package MSC Sinda is used in a variety of industries to solve complex problems of thermal analysis of structures: • electronic equipment (from individual devices to complex systems); • equipment for electronic circuit boards processing; • components of car engines, aircrafts, etc.; • cooling systems and air conditioning; • thermal losses of buildings and structures; • spacecrafts, launch vehicles, control unit; • solar panels; • energy sources, fuel cells, generators; • electronic devices, avionics; • small and large household appliances. MSC.Marc systems are the most powerful and advanced tools used to address these problem. The basis of process simulation is the application of nonlinear finite element analysis method. Its results can be presented both in numerical and graphic forms. MSC.Marc enables the user to solve a wide range of problems of structures analysis, processes and contacts using the finite element method. These procedures provide a solution to simple and complex, linear and nonlinear problems. The analyst has a graphical access to all components of the interface of MSC.Marc Mentat and MSC.Patran. MSC.Marc also includes parallel processing of complex tasks using a new method.[2] MEZA is designed to calculate various thermal problems, with different functions of external influences. The calculation is carried out using an explicit difference scheme.[3] Models consisting of several materials can be calculated. The program supports up to 31 materials in one sample. The program offers the possibility change of materials in an accessible form. (Parameter changes do not result in changes in base materials, but only in changes for this process) The function of external influences, i.e. heat sources are supported by the program in the form of shared libraries. After connecting to the abovementioned libraries outside influences, individual for each source, become available in the program options. You can view the isotherms in any section of the sample perpendicular to one of the axes, as well as viewing the phase formation and the temperature in each point of the sample.[4]
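As a minimal illustration of the explicit finite-difference approach used for such thermal problems (a generic 1-D sketch, not code from MEZA or any of the packages named above), the heat conduction equation ∂T/∂t = a·∂²T/∂x² can be stepped forward as follows:

```python
import numpy as np

def explicit_heat_1d(t0, a, dx, dt, steps):
    """March the 1-D heat equation dT/dt = a * d2T/dx2 with an explicit scheme.
    Stable only if a*dt/dx**2 <= 0.5 (the classic explicit-scheme restriction)."""
    r = a * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this dt/dx combination"
    t = t0.copy()
    for _ in range(steps):
        # update interior nodes; the two boundary temperatures are held fixed
        t[1:-1] = t[1:-1] + r * (t[2:] - 2.0 * t[1:-1] + t[:-2])
    return t

# Assumed example: a 0.1 m steel bar, 1000 C at one end, 20 C elsewhere
x = np.linspace(0.0, 0.1, 51)
t0 = np.full_like(x, 20.0)
t0[0] = 1000.0                      # heated end, e.g. near the weld
result = explicit_heat_1d(t0, a=1.2e-5, dx=x[1] - x[0], dt=0.05, steps=200)
print(f"temperature 10 mm from the heated end after 10 s: {result[5]:.0f} C")
```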


The construction of three-dimensional graphs is supported for:
1. The sample surface temperature.
2. Isosurfaces of the sample temperature.
3. The general sample configuration.
The user is also given the opportunity to select an unlimited number of control points in the sample. Each control point accumulates statistical data on the processes occurring in it: the temperature change during the process time, phase transitions, and the change in the rate of temperature growth [5].
MATLAB is a software package for technical computing. In the welding industry this software package is used because it helps to calculate linear and nonlinear transient thermal problems. This has become especially relevant because until recently the calculation was made only for stationary systems, which is not entirely correct. Obtaining more accurate data helps to clarify the mode parameters and their effect on the properties and quality of the joints [6].
Conclusion
In conclusion, it is necessary to emphasize the importance of learning such software for welding production specialists. In the new age of information technology, where the computer has become an integral part of human activity and is being introduced into automated welding lines, welding software is a hot topic for study and research.
References
1. Official site of the ANSYS program products [Electronic resource]: http://www.ansys.msk.ru
2. Official site of the MSC.Sinda program products [Electronic resource]: http://www.mscsoftware.ru
3. Никифоров Н.И., Кректулева Р.А. Математическое моделирование технологического процесса кислородной резки // Сварочное производство, 1998. – № 4. – С. 3-6.
4. Кректулева Р.А., Бежин О.Н., Косяков В.А. Формирование тепловых локализованных структур в сварном шве при импульсно-дуговой сварке неплавящимся электродом // ПМТФ, 1998. – № 6. – с. 172-177.
5. Бежин О.Н., Дураков В.Г., Кректулева Р.А. и др. Компьютерное моделирование и микроструктурное исследование градиентных композиционных структур, формирующихся при поверхностной электронно-лучевой обработке углеродистой стали / В сб.: Экспериментальные методы в физике структурно-неоднородных конденсированных сред: Тр. 2-й международной науч.-техн. конф. – Барнаул, 2001. – с. 22-28.
6. Official site of the Matlab program products [Electronic resource]: http://www.matlab.exponenta.ru


RESEARCH OF STRUCTURE AND PROPERTIES OF LASER WELDED JOINT IN AUSTENITIC STAINLESS STEELS
Oreshkin A.A.
Scientific Supervisor: Cand. Sc. Haydarova A.A. Linguistic advisor: Demchenko V.N.
Tomsk Polytechnic University, Russia, Tomsk, Lenin str., 30, 634050
E-mail: [email protected]
Abstract. The fusion zone shape and final solidification structure of different types of austenitic stainless steels with different thicknesses were evaluated as a function of the laser parameters. Both bead-on-plate and autogenous butt weld joints were made using a carbon dioxide laser with a maximum output of 5 kW in the continuous wave mode. Combinations of laser power, welding speed, defocusing distance and type of shielding gas should be carefully selected, based on the metal thickness, so that weld joints with complete penetration, minimum fusion zone size and an acceptable weld profile are produced.
Introduction
CO2 laser beam welding in the continuous wave mode is a high energy density and low heat input process. The result is a small heat-affected zone (HAZ), which cools very rapidly with very little distortion, and a high depth-to-width ratio of the fusion zone. The heat flow and fluid flow in the weld pool can significantly influence temperature gradients, cooling rates and the solidification structure. In addition, the fluid flow and the convective heat transfer in the weld pool are known to control the penetration and shape of the fusion zone [1]. Generally, laser beam welding involves many variables (laser power, welding speed, defocusing distance and type of shielding gas), any of which may have an important effect on heat flow and fluid flow in the weld pool. This, in turn, will affect the penetration depth, shape and final solidification structure of the fusion zone. Both the shape and the microstructure of the fusion zone considerably influence the properties of the weldment. Many papers [2-4] deal with the shape and solidification structure of the fusion zone of laser beam welds in relation to different laser parameters. However, the effect of all influencing factors of laser welding has not been extensively researched yet. More investigation is required to understand the combined effect of laser parameters on the shape and microstructure of the fusion zone. The present investigation is concerned with laser power, welding speed, defocusing distance and type of shielding gas and their effects on the fusion zone shape and final solidification structure of some austenitic stainless steels.
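The welding regimes in the experimental procedure below are characterized by their nominal heat input, i.e. the laser energy delivered per unit length of seam. A short sketch of that arithmetic is given here; the conversion itself is a standard one, and the parameter pairs are examples drawn from the power and speed ranges quoted below.

```python
# Nominal heat input of a laser weld: energy per unit seam length,
# HI (kJ/mm) = P (kW) / S (mm/s). Parameter pairs are examples drawn from the
# 2-5 kW power and 0.5-3 m/min speed ranges used in this work.
def heat_input_kj_per_mm(power_kw: float, speed_m_per_min: float) -> float:
    speed_mm_per_s = speed_m_per_min * 1000.0 / 60.0
    return power_kw / speed_mm_per_s

for p_kw, s_m_min in [(2.0, 3.0), (4.0, 3.0), (4.0, 0.5)]:
    print(f"P = {p_kw} kW, S = {s_m_min} m/min -> "
          f"{heat_input_kj_per_mm(p_kw, s_m_min):.3f} kJ/mm")
# 2 kW at 3 m/min gives 0.04 kJ/mm and 4 kW at 0.5 m/min gives 0.48 kJ/mm,
# i.e. the end points of the 0.04-0.48 kJ/mm range quoted below.
```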

Experimental procedure
Three types of commercial austenitic stainless steels, 03Х18Н11, 03Х17Н14М3 and 08Х18Н14М2Б, were used. The thickness of both the 03Х18Н11 and 03Х17Н14М3 steels was 3 mm, while that of the 08Х18Н14М2Б steel was 5 mm. Both bead-on-plate and autogenous butt weld joints were made using a carbon dioxide laser with a maximum output of 5 kW in the continuous wave mode. Bead-on-plate welds were made on plates 3 mm thick, while autogenous butt weld joints were made on plates 3 and 5 mm thick. Specimens with machined surfaces were prepared as square butt joints with dimensions of 125 × 150 mm and were fixed firmly to prevent distortion. Combinations of laser power (P) of 2-5 kW and welding speed (S) of 0.5-3 m/min resulted in nominal heat inputs ranging from 0.04 to 0.48 kJ/mm. The defocusing distance was in the range of −5 to 3 mm. Shielding was produced using either argon or helium gas.
Laser power effect
The effect of heat input as a function of laser power was clarified using type 03Х18Н11 and type 03Х17Н14М3 steels. Both welding speed and defocusing distance were kept constant at 3 m/min and zero respectively. Complete penetration of the 3 mm base metal was obtained at a laser power equal to or greater than 4 kW. The weld bead showed the characteristics of laser welding, with a depth/width ratio close to 3. No welding cracks or porosity were found in any of the welds; this may be partly due to the good crack resistance of the base metal and the welding conditions provided. The results also indicated that the development of the weld pool is essentially symmetrical about the axis of the laser beam. Yet, a lack of symmetry at the root side was observed, particularly at higher welding speed with an unsteady fluid flow in the weld pool. This is due to the presence of two strong and opposing forces, namely the electromagnetic force and the surface tension gradient force. At these locations, the electromagnetic force may overcome the surface tension force, thereby influencing convective heat transfer. As a result, any local perturbation in the weld pool can
cause the flow field to change dramatically, resulting in the observed lack of local symmetry. Laser power has less influence on both the weld profile and the HAZ width in comparison with its effect on penetration depth. This is in agreement with other research works, where it is pointed out that changing the laser power between 3 and 5 kW [5] did not result in any significant change in the weld size or shape. It is expected that similar results concerning the dependence of penetration depth on laser power could be obtained in the case of type 08Х18Н14М2Б steel, due to the similarity in both physical and mechanical properties.
Welding speed effect
The effect of welding speed was investigated at the optimum laser power (4 kW) and zero defocusing distance. The depth/width ratio increased sharply from 2.1 to 4.1 with the increase in welding speed from 0.5 to 3 m/min. The dependence of the depth/width ratio on welding speed was confirmed at a different laser power (3 kW). A lower welding speed resulted in a considerable increase in the fusion zone size and consequently a decrease in the depth/width ratio, leading to an unacceptable weld profile. Complete penetration with a relatively acceptable fusion zone size for the 3 mm base metal thickness was obtained at a welding speed of 2 m/min. The fusion zone is symmetrical about the axis of the laser beam. The above results have shown that laser power and welding speed should be optimized in order to minimize heat input; a satisfactory weld with reliable quality can then be obtained. This reflects one of the most notable features of laser welding compared with other welding processes, i.e. small heat input. At high welding speed, attenuation of the beam energy by the plasma is less significant. This results in relatively more exposure of the laser beam on the sample surface. Consequently, the depth/width ratio is increased and the fusion zone size is minimized.
Defocusing distance effect
Defocusing distance, or focus position, is the distance between the specimen surface and the optical focal point. In order to study its effect on both penetration depth and weld profile, bead-on-plate welds were made with the defocusing distance changed between −5 and 3 mm. Low laser power (2 kW) and high welding speed (3 m/min) were selected to obtain incomplete penetration. In all weld cross sections of type 03Х18Н11 steel made at different defocusing distances, no cracking or porosity were observed. The penetration depth decreases considerably when the defocusing distance is changed from zero to either negative or positive values, as a result of decreasing laser beam density.
The penetration depth decreased from 1.9 to 1.6 mm on changing the defocusing distance from zero to either −1 or 1 mm. Then the penetration depth decreased sharply to about 0.2 mm on changing the defocusing distance to more negative (−5 mm) or positive (4 mm) values. These results indicated that the most effective range of defocusing distance for obtaining maximum penetration with an acceptable weld profile lies between zero and −1 mm. In order to obtain the optimum value, complete penetration butt welds were made using the previously obtained optimum laser power (4 kW) and optimum welding speed (3 m/min). The most acceptable weld profile was obtained at a defocusing distance of −0.2 mm for 3 mm thickness, where the weld bead depth/width ratio is maximum and the fusion zone size is minimum, with a slight taper configuration. However, the optimum defocusing distance to attain an acceptable weld profile for 5 mm thickness was −0.4 mm.
Conclusion
(1) The penetration depth increased with the increase in laser power. However, laser power has less effect on the weld profile.
(2) Unlike laser power, welding speed has a pronounced effect on the size and shape of the fusion zone. An increase in welding speed resulted in an increase in the weld depth/width ratio and hence a decrease in the fusion zone size.
(3) Minimizing heat input and optimizing energy density through optimizing laser power, welding speed and defocusing distance is of considerable importance for the weld quality in terms of fusion zone size and profile. Helium is more effective than argon as a shielding gas for obtaining an acceptable weld profile.
(4) The fusion zone composition was insensitive to the change in heat input. However, an increase in welding speed and/or a decrease in laser power resulted in a finer solidification structure due to low heat input. A dominant austenitic structure with no solidification cracking was obtained for all welds. This could be associated with primary ferrite or mixed mode solidification based on the Suutala and Lippold diagrams.
(5) Mechanical properties (tensile, hardness and bending at room temperature) were not significantly affected by heat input.
References
1. T. Zacharia, S.A. David, J.M. Vitek and T. Debroy, Metall. Trans. № 20 (2000) p. 125.
2. N. Suutala, Metall. Trans. № 14 (1998) p. 191.
3. S.A. David and J.M. Vitek, in: Laser in Metallurgy, conference proceedings of the Metallurgical Society (1989) p. 147.
4. J. Arata, F. Matsuda and S. Katayama, Trans. JWRI № 5 (1976) p. 35.


SURFACE TENSION TRANSFER (STT)
E.M. Shamov, A.S. Marin
Scientific Supervisor: A.F. Knyazkov, docent. Linguistic advisor: V.N. Demchenko, senior tutor
Tomsk Polytechnic University, Russia, Tomsk, Lenin str., 30, 634050
E-mail: [email protected]
First of all, let us consider such terms as inverter, AC, DC and GTAW.
Inverter: a power source which increases the frequency of the incoming primary power, thus providing for a smaller machine size and improved electrical characteristics for welding, such as faster response time and more control for pulse welding. An inverter is an electrical device that converts direct current (DC) to alternating current (AC); the converted AC can be at any required voltage and frequency with the use of appropriate transformers, switching and control circuits. Static inverters have no moving parts and are used in a wide range of applications, from small switching power supplies in computers to large electric utility high-voltage direct current installations that transport bulk power. Inverters are commonly used to supply AC power from DC sources such as solar panels or batteries. The electrical inverter is a high-power electronic oscillator. It is so named because early mechanical AC to DC converters were made to work in reverse, and thus were "inverted", to convert DC to AC. The inverter performs the opposite function of a rectifier [1].
Alternating current (AC): an electric current that reverses its direction at regularly recurring intervals [2].
Direct current (DC): an electric current flowing in one direction only and substantially constant in value [2].
Gas tungsten arc welding (GTAW): an arc welding process that uses a non-consumable tungsten electrode to produce the weld (also known as tungsten inert gas welding, TIG) [1].
Nowadays it is extremely important to advance all branches of engineering in order to develop manufacturing processes. That is why welding, as an inherent branch of engineering, is constantly improving. One should be aware that welding engineering embraces the basic principles of making all kinds of constructions. As constructions are being intensively developed at present, the methods and equipment for their production must also be advanced. The Invertec STT power source is an example of such equipment. The point to be highlighted is that STT combines inverter high-frequency technology with a highly developed control scheme for the welding current [3].
The STT technology provides precise control of the welding current and wire feed speed, significantly minimizing the
amount of smoke, and ensuring perfect formation of the weld. What is more, the STT apparatus has excellent properties for welding root joints, and it can replace TIG welding of structural and stainless steel without problems. It is widely used in the chemical industry, in the manufacture of storage equipment, as well as in the welding of pipelines.
For many years, pipe fabricators have been searching for a faster, easier method to make single-sided low hydrogen open root welds. Welding open root pipe is difficult even for skilled welders, and inflexible positioning makes pipeline welding more difficult, time consuming and expensive. Higher strength pipe steels are driving a requirement to achieve a low hydrogen weld metal deposit. GTAW has been the only available process capable of achieving the quality requirements, but GTAW root welds are very expensive. The GMAW process tends to be rejected because of problems with sidewall fusion and lack of penetration. Lincoln Electric has developed and proven the Surface Tension Transfer (STT) process to make single-sided root welds on pipe. STT produces a low hydrogen weld deposit and makes it easier to achieve a high quality root weld in all positions. The STT process has a field-proven quality record. STT eliminates the lack of penetration and poor sidewall fusion problems encountered when using the traditional short-arc GMAW process.
STT has many advantages, some of them being:
• penetration control, which provides a reliable root pass, a complete back bead and ensured sidewall fusion;
• low heat input, which helps to reduce burn-through, cracking and other weld defects;
• cost reduction, as it uses 100% CO2, the lowest cost gas;
• current control independent of wire feed speed, which allows the operator to control the heat put into the weld puddle;
• flexibility, which provides the capability of welding stainless steel, alloys, and mild or high strength steels without compromising weld quality, and the capability of welding out of position;
• speed, as high quality open root welds are made at faster travel speeds than GTAW;
• ease of operator use;
• a low hydrogen weld metal deposit, which means that hydrogen levels meet the requirements for high strength pipe steel applications.


Speaking about the STT process, one should bear in mind that a background current between 50 and 100 amps maintains the arc and contributes to base metal heating. After the electrode initially shorts to the weld pool, the current is quickly reduced to ensure a solid short. Then a pinch current is applied to squeeze molten metal down into the pool while the necking of the liquid bridge is monitored from electrical signals. When the liquid bridge is about to break, the power source reacts by reducing the current to about 45-50 amps. Immediately following the arc re-establishment, a peak current is applied to produce a plasma force pushing down the weld pool to prevent an accidental short and to heat the puddle and the joint. Finally, an exponential tail-out is adjusted to regulate the overall heat input. The background current serves as a fine heat control. Thus, the process actually is characterized by the following steps (Figure 1):

A. STT produces a uniform molten ball and maintains it until the "ball" shorts to the puddle.
B. When the "ball" shorts to the puddle, the current is reduced to a low level, allowing the molten ball to wet into the puddle.
C. Automatically, a precision pinch current waveform is applied to the short. During this time, special circuitry determines that the short is about to break and reduces the current to avoid the spatter-producing "explosion".
D. STT circuitry senses that the arc is re-established and automatically applies peak current, which sets the proper arc length. Following peak current, internal circuitry automatically switches to the background current, which serves as a fine heat control.
E. STT circuitry re-establishes the welding arc at a low current level.
One more important issue is the application sphere of STT. STT is the process of choice for low heat input welds. Thus, STT is also ideal for:
• open root welds – pipe and plate;
• stainless steel and other nickel alloys – the petrochemical, utility and food industries;
• thin gauge material – automotive;
• silicon bronze – automotive;
• galvanized steel – such as furnace ducts;
• semi-automatic and robotic applications.
In conclusion, summarizing all the major points taken into consideration, one cannot deny that the advantages of STT are quite obvious. The greatest one is its huge sphere of application, which helps to improve the welding process. Thus, to reduce effort and to make the production of welds faster and more efficient with the help of STT, further research and investigation are required.
References
1. Dictionary and Thesaurus – Merriam-Webster Online. [Electronic resource]. Access mode: http://www.webster.com
2. Lincoln Electric. Waveform control technology. Surface Tension Transfer. [Electronic resource]. Access mode: http://content.lincolnelectric.com/pdfs/products/literature/nx220.pdf
3. Welding terms. [Electronic resource]. Access mode: http://www.welding.com/welding_terms


Section IV

ELECTRO MECHANICS



ENERGY-SAVING TECHNOLOGY FOR TESTING OF TRACTION INDUCTION MOTORS
Beierlein E.V., Tyuteva P.V.
Tomsk Polytechnic University, 30 Lenin Avenue, Tomsk, Russia 634050
E-mail: [email protected]
Electric energy has an added benefit in comparison with other kinds of energy: it can be easily transferred over any distance, it is convenient to distribute among consumers, and it can be transformed into other forms of energy easily and with high efficiency. At present, as natural resources are limited and the cost of electric energy constantly increases, science faces the challenge of decreasing power consumption by implementing energy-saving technologies.
At present there are hundreds of thousands of electrical machines with an average power of about 1000 kW in operation, applied as traction or auxiliary machines. Many of them have reached, or are close to, the end of their service life. For various reasons, the rate of their replacement by new ones is insufficient. As a result, failures of electric machines occur more frequently, and the amount of maintenance work and the associated expenses increase. Every manufactured induction traction motor is subjected to tests in order to check the correctness of the product and to confirm the motor's electric and mechanical parameters. The requirements for the quality and reliability of traction motors are constantly increasing. The basic tests are carried out at rated load. Taking into account that traction motor power is comparatively high, for the purpose of electric power economy traction motors are loaded by the back-to-back method. In this case two traction motors are tested at the same time: the first one works in the motor mode, the second one works in the generator mode. Loss compensation is carried out by one of the known methods. For induction auxiliary motors, such test patterns have not been developed. For induction traction motors, there are no test stations with energy-saving technologies. Taking into account the future trend of wide application of induction traction motors, and that prototype electric locomotives with induction traction motors already exist, the problem of creating test stations with energy-saving technology is urgent.
The basic tests of traction motors are carried out under rated load. From this point of view, the most economic test pattern is the back-to-back load method, when two electrical machines are connected electrically and mechanically, so that one of them works in the generator mode and gives all the produced electric energy to the second machine, which works in the motor mode and spends all the
developed mechanical energy on rotating the first machine [1]. The mains supply covers only the losses in the circuit. The back-to-back test pattern is widely used for testing high power commutator traction motors. Until now, the back-to-back method for testing induction motors with direct connection of their shafts has been impossible, as the rotation frequencies of the motor (nm) and the generator (ng) with an equal number of poles are different. In the known circuit, the connection is made by means of a function-generating mechanism, and the set rotation frequencies are realized by selecting the diameters of the sheave blocks installed on the shafts of the machines under test, or the gear box reduction ratio. At present, owing to developments in power semiconductor engineering, various kinds of semiconductor frequency converters are designed and produced serially, which has led to the wide application of induction motor variable speed drives, whose basic advantages are smooth regulation, rigid mechanical characteristics and high drive efficiency. On prototype electric locomotives, induction traction motors are supplied by a static frequency converter. Therefore, tests using such converters are appropriate. As a result, the schematic circuit for testing induction traction motors in the hourly mode, using two identical motors as shown in Fig. 1, is proposed. In the given schematic circuit, the electric machines are connected both electrically and mechanically [2, 3]. The abbreviations used in Fig. 1 are as follows: M1, M2 – two identical induction traction motors; FC – frequency converter; K1-K4 – contactors.

Fig. 1. The schematic circuit for induction traction motor testing.
It is possible to supply each induction traction motor from the frequency converter via the combination of contactors K1-K4. At the same
time, one induction traction motor works as the motor, and the second one works in the generator mode, and vice versa. In order to bring an induction machine into the generator mode, it is necessary to set its rotation frequency above the synchronous speed, in other words to secure a negative slip with respect to the supply frequency. Therefore the tested motor should be supplied from the frequency converter with a frequency above the mains supply frequency. In the test pattern the motor under test and the loading generator are connected mechanically by a joint box, so the rotation frequencies of the motor rotor and the generator rotor are equal, and the following equation can be written:

(60·fm / p)·(1 − sm) = (60·fg / p)·(1 − sg).   (1)

Using (1), one can obtain the supply frequency of the motor from the frequency converter:

fm = fg·(1 − sg) / (1 − sm).   (2)

The machine that works in the generator mode is rotated by the primary motor in the direction of the rotating stator field, but with a speed n2 > n1; the rotation of the rotor relative to the stator field is thus reversed (in comparison with the motor mode of this machine), as the rotor overtakes the stator field. As the motor and the generator work jointly during the tests, they are connected mechanically and electrically. Hence, their power diagrams represent a sequential combination of the power diagrams of machines working in the motor and generator modes, as shown in Fig. 2.
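A short numerical illustration of relation (2) follows. The slip values and the pole-pair number are illustrative assumptions (p = 3 is consistent with the 65.4 Hz and roughly 1295 rpm rated data quoted later in Table 1); they are not measured quantities from this work.

```python
# Sketch of relation (2): with both machines on a common shaft, the machine run
# as a motor must be fed at f_m = f_g * (1 - s_g) / (1 - s_m) so that the machine
# run as a generator sees a negative slip s_g. Slip values are assumed examples.
def motor_supply_frequency(f_g: float, s_m: float, s_g: float) -> float:
    return f_g * (1.0 - s_g) / (1.0 - s_m)

f_g = 65.4      # generator-side supply frequency, Hz (rated value of the motor)
s_m = 0.01      # assumed slip of the machine working as a motor (positive)
s_g = -0.01     # assumed slip of the machine working as a generator (negative)
p = 3           # pole pairs, inferred from 65.4 Hz and ~1295 rpm rated speed

f_m = motor_supply_frequency(f_g, s_m, s_g)
n_shaft = 60.0 * f_m / p * (1.0 - s_m)   # common shaft speed, rpm

print(f"converter frequency for the motoring machine: {f_m:.2f} Hz")
print(f"common shaft speed: {n_shaft:.0f} rpm (above 60*f_g/p = {60*f_g/p:.0f} rpm)")
```

The shaft speed comes out above the synchronous speed corresponding to fg, which is exactly the condition for the second machine to operate as a generator.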


Fig. 2. The power diagram of the induction motor – induction generator system.
To quantify the electric power economy, let us obtain the test circuit economy factor Ke. It is defined as the ratio of the difference between the active power consumed by the electric machine under test and the active power consumed by the test circuit as a whole to the active power consumed by the machine under test:

Ke = (P1 − Ptc) / P1,   (3)

where P1 – the active power consumed by the electric machine under test; Ptc – the active power consumed by the test circuit as a whole.
The expression for the test circuit economy factor for the back-to-back method:

Ke = (P2/ηm − P2·(1 − ηm·ηg)/ηm) / (P2/ηm) = ηm·ηg.   (4)

As can be seen from this expression, the test circuit economy factor depends on the efficiencies of the machines under test. The power losses of both machines are covered in the test station by the main supply source. Let us now carry out a comparative analysis of test patterns using the induction traction motor NTA-1200 as an example; its rated parameters are given in Table 1. The most widely used induction traction motor test pattern is the pumpback method without supply matching. The public corporation VelNII has used the pumpback test method with the power losses covered from a direct current motor for induction traction motors [2].

Table 1. General properties

Parameter                        | Short-time (hourly) duty | Continuous running duty
Rated power, kW                  | 1200                     | 1170
Line voltage, V                  | 2183                     | 2183
Phase current, A                 | 385                      | 376
Rated frequency, Hz              | 65,4                     | 65,4
Rated rotating frequency, rpm    | 1295                     | 1295
Maximal rotating frequency, rpm  | 2680                     | 2680
Driving torque, kN·m             | 8,853                    | 8,629
Efficiency, %                    | 95,7                     | 95,8
Power factor, r.u.               | 0,861                    | 0,861

The test circuit economy factor of this test pattern:

Ke = 1 − (1 − η1·η2·η3·η4)/(η4·η5) = (0,65 ÷ 0,75),

where η1, η2, η3, η4, η5 – the efficiencies of the electric machines that form part of the test circuit of the pumpback method without supply matching.
Power economy under the pumpback test method:

Pe = (P2r/ηr)·Ke = (1200/0,957)·0,7 = 877,74 kW.

Power consumption for covering the power losses in this test pattern:

Plos = P2r/ηr − Pe = 1200/0,957 − 877,74 = 376,18 kW.

The electric energy consumed under the pumpback test method:

W = Plos·t = 376,18·1 = 376,18 kW·h,

where t – the test time according to State Standard 2582-81, equal to 1 hour.
The cost of the electric energy consumed under the pumpback test method:

C1 = W·Cee = 376,18·0,09 = 33,86 EU,

where Cee – the cost of electric power per 1 kW·h [4].
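The pumpback cost figures above can be reproduced with a few lines; the sketch below uses only the values quoted in the text (1200 kW rated power, 0.957 rated efficiency, Ke = 0.7, a 1-hour test and 0.09 EU per kW·h).

```python
# Reproduces the pumpback-method cost arithmetic quoted in the text.
P_2r, eta_r = 1200.0, 0.957   # rated power (kW) and rated efficiency
K_e = 0.7                     # economy factor of the pumpback test circuit
t, C_ee = 1.0, 0.09           # test time (h) and energy price (EU per kWh)

P_in = P_2r / eta_r           # active power drawn by the motor under test, kW
P_e = P_in * K_e              # power returned by the loading circuit, kW
P_los = P_in - P_e            # power taken from the mains to cover losses, kW
C_1 = P_los * t * C_ee        # cost of one hourly test, EU

print(f"P_e = {P_e:.2f} kW, P_los = {P_los:.2f} kW, C_1 = {C_1:.2f} EU")
# -> P_e = 877.74 kW, P_los = 376.18 kW, C_1 = 33.86 EU, matching the text.
```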


The pumpback test method for induction traction motors without supply matching contains many auxiliary machines, which results in additional energy transformations. Taking into account the weaknesses of the pumpback method (too many auxiliary machines, which increase the test station area, complicate the control circuit and increase the number of energy transformations), the back-to-back test method has been proposed. The back-to-back test method minimizes the weaknesses of the pumpback test method.
Economy factor of the back-to-back test method:

Ke = ηm·ηg = (0,88 ÷ 0,94).

Power economy under the back-to-back test method:

Pe = (P2r/ηr)·Ke = (1200/0,957)·0,9 = 1065,83 kW.

Power consumption for covering the power losses under the back-to-back test method:

Plos = P2r/ηr − Pe = 1200/0,957 − 1065,83 = 188,09 kW.

The electric energy consumed under the back-to-back test method:

W = Plos·t = 188,09·1 = 188,09 kW·h.

The cost of the electric energy consumed under the back-to-back test method:

C2 = W·Cee = 188,09·0,09 = 16,93 EU.

The saving in the cost of consumed electric power with the back-to-back method in comparison with the pumpback method (with the power losses covered from a direct current motor) used for induction traction motor testing:

Ee = C1 − C2 = 33,86 − 16,93 = 16,93 EU.

The percentage saving in the cost of consumed electric power with the back-to-back method in comparison with the pumpback method:

E% = (C1 − C2)/C1 · 100 % = (33,86 − 16,93)/33,86 · 100 % = 50 %.

Since under the back-to-back test method two induction traction motors are tested at the same time, the cost is evenly distributed between them. The consumed electric power cost per motor:

C2sp = C2/m = 16,93/2 = 8,47 EU,

where m – the number of simultaneously tested induction traction motors.
The specific saving in the cost of consumed electric power with the back-to-back method in comparison with the pumpback method with the power losses covered from a direct current motor:

Eesp = C1 − C2sp = 33,86 − 8,47 = 25,39 EU.

The percentage of the specific saving in the cost of consumed electric power with the back-to-back method in comparison with the pumpback method with power
losses covered from a direct current motor used for induction traction motor testing:

E%sp = (C1 − C2sp)/C1 · 100 % = (33,86 − 8,47)/33,86 · 100 % = 75 %.

As a result, energy saving in the testing of medium and high power induction motors, for example induction traction motors, can be achieved by using the back-to-back method, and the realized saving can be about 75 % of the energy consumed for testing. To estimate the annual electric power saving, we shall use the number of direct current traction motors tested. At an average test station about 1000 direct current traction motors are tested in the course of a year; we assume that on transition to induction traction motors the number of machines tested will remain at the same level. The annual saving in consumed electric power with the back-to-back test method will then amount to 25390 EU.
In conclusion, it is significant to note that the proposed test pattern based on the back-to-back method for testing high power induction traction motors achieves savings in electric power. The electric power economy in the given test pattern depends on the efficiencies of the machines under test. The comparative economic analysis shows that the saving in consumed electric power is about 50 % when the back-to-back method is applied instead of the pumpback method. It has been found that the specific electric power saving per induction traction motor is about 75 % (when the back-to-back test method is used for testing two induction traction motors at the same time). Furthermore, the use of the back-to-back test method allows not only saving electric energy during testing, but also reducing the test station area and the amount of man-hours per motor tested.
REFERENCES
1. Zherve, G.K. Industrial tests of electric machines. – Leningrad: Energoatomizdat, 1984. – 506 p.
2. Beierlein, E.V., Cukublin, A.B., Rapoport, O.L. The test circuit of variable speed traction induction motors // News of institutions of higher education. Electromechanics. – 2006. – № 3. – pp. 46-48.
3. Beierlein, E.V., Cukublin, A.B., Rapoport, O.L. The device for traction motors testing // Useful model patent № 80018 of January 29th, 2009.
4. Electricity prices by type of user – Euro per kWh. Industrial consumers // EUROSTAT. 4/30/2009. http://epp.eurostat.ec.europa.eu/tgm/table.do?tab=table&init=1&plugin=0&language=en&pcode=tsier040


Section V

THE USE OF MODERN TECHNICAL AND INFORMATION MEANS IN HEALTH SERVICES



DEVICE FOR THE DESTRUCTION OF CONCREMENTS IN THE HUMAN BODY
Khokhlova L.A., Ivanova L.Yu.
Scientific supervisor: Ivanova L.Yu.
Tomsk Polytechnic University, 30, Lenin ave., Tomsk, 634050, Russia
E-mail: [email protected]
Abstract. Nowadays the problem of the formation of organic-mineral concrements in the human body is highly relevant. It covers such areas of medicine as urology, cardiology, orthopedics, gastroenterology and others. In this paper, the electropulsing method of destroying pathological formations in the human body is considered. The specifications and design parameters of the electric pulse contact lithotriptor, its efficiency and its competitive advantages in this class of devices are presented.
Introduction. The formation of organic-mineral concrements in the human body is a form of metabolic disorder which tends to increase due to changes in diet and growing environmental hazards that directly impact the human body. The problem is urgent because in 65-70% of cases the disease is diagnosed in people aged 20-60 years, i.e. in the most active period of their working life [1]. According to US statistics, the incidence of urolithiasis now reaches 5.3%, and that of coronary heart disease more than 60% [2]. Currently, minimally invasive ways to break stones and vascular plaques, such as angioplasty and the use of shock waves (lithotripsy), are being intensively developed all over the world. Lithotripsy is widespread in urology; today, however, the application of shock waves in cardiology is also being studied. Nowadays, lithotripsy is represented by two main directions: extracorporal shock-wave lithotripsy (ESWL) and contact lithotripsy. ESWL is the most popular method in urology; an indisputable advantage of this method is the absence of direct invasion into the patient's body. However, ESWL has some disadvantages, such as the need for repeated sessions of lithotripsy, the need for accurate focusing, the necessity of additional radiological control, and the risk of injury to the surrounding tissues [3]. Contact lithotripsy is based on the transfer of energy to the stone through a probe introduced into the body through an endoscope. Its main advantages are direct energy transfer to the stone, immediate control over the process, and the possibility of destroying stone fragments. In contact lithotripsy, ultrasound, laser, pneumatic and electrohydraulic lithoclasts are popular. In the early 2000s, Tomsk scientists V. Chernenko, V. Diamant, M. Lerner and others [4] developed another method of endoscopic
lithotripsy – electropulsing lithotripsy. It combines the advantages of the traditional methods of stone destruction, the electrohydraulic method (rapid destruction of the stone) and the laser method (a long thin flexible probe), and avoids their disadvantages, such as the high risk of ureter perforation of the electrohydraulic method, and the long destruction time and thermal tissue damage of the laser method.
Device description. The device is based on the electric pulse method of destroying solid objects in a liquid medium. Objects are destroyed as a result of electrical breakdown created inside the solid (the object to be destroyed) by short voltage pulses. The principle relies on the Vorobiev effect. According to this effect, when the exposure time of the impulse voltage decreases, the electric strength of liquid dielectrics grows faster than that of solid dielectrics, and the ratio of the electric strengths of the media is inverted. Under static voltage, the electric strength of solid insulators exceeds that of liquid dielectrics; when a pulsed voltage shorter than a microsecond is applied, the electric strength of dielectric liquids increases and becomes higher than that of solid dielectrics. The operation of this effect is clearly demonstrated by the curves in Fig. 1. This graph conventionally shows the breakdown voltage as a function of the time of its action (the volt-second characteristics) for solid and liquid dielectrics under a voltage pulse with an oblique front. The point of intersection of these characteristics determines the critical slope Ac of the voltage rise on the leading edge. The figure also shows the voltage at the pulse front before and after the breakdown of the solid dielectric.

Figure 1. Chart describing the Vorobiev effect: U(t) – the applied voltage pulse, Udis – the
voltage at which the breakdown of the solid dielectric occurs, Ud(t) – the voltage across the insulator during its discharge, 1 and 2 – the volt-second characteristics of the solid and liquid dielectrics respectively, Ac – the critical slope of the voltage on the pulse front, above which the Vorobiev effect is manifested.
If a voltage pulse with a small slope of the rising edge is applied to the pin electrode, it causes a discharge in the liquid along the surface of the solid dielectric; at a large slope, the discharge is driven into the solid dielectric and chips off part of its surface. At a large steepness, the displacement current due to the motion of the surface-discharge plasma passes through a protrusion on the contact electrode and causes it to explode, forming a jet of metal plasma which is injected into the solid dielectric and leads to a discharge within it (Fig. 2).

Figure 2. The principle of electric pulse destruction based on the Vorobiev effect: A and C – pin-shaped anode and cathode, 2 – the solid insulator placed in a liquid dielectric medium, dis – the discharge channel, 3 – spalling on the surface of the solid dielectric.
Considering these features, the parameters of the output pulses of the device were chosen. The steepness of the voltage pulse is the most important one. In addition, a rapid release of energy in the discharge channel is needed for a microexplosion of the solid in the gap between the electrodes. Besides, in medicine there are additional limitations. Direct transfer of energy to the stone requires taking into account the anatomical features of the urinary tract. First of all, this concerns the diameter of the tool (probe) introduced through the endoscope into the natural passageways of the genitourinary system: it must not exceed 1,5 mm. Breakdown of the electrolyte in gaps of 0,1-1 mm is performed by the discharge of a low-inductance capacitor delivered to the body segment by means of a low-impedance cable [6]. Therefore, a high-voltage generator of nanosecond pulses is required. The generator is based on a thyratron with a hollow cathode, used as an
inertialess relay for the energy storage, which is a set of high-voltage ceramic capacitors. As a result, the circuit generates a pulse with the required parameters.
The research results. For several years, Tomsk scientists have studied the destruction of organic concretions [7]. According to the experiments, effective and safe destruction parameters are:
- a pulse energy of 0,1-1,0 J;
- a voltage pulse amplitude of 3-10 kV;
- a wavefront of 20 ns;
- a current pulse amplitude of 150-500 A;
- a current pulse duration of 500-700 ns;
- generation of single pulses and of pulses with a frequency of 1 to 5 Hz.
In addition, the total energy expended in the destruction of a stone is considerably smaller than for the most popular ESWL and laser lithoclasts. This factor increases the safety of operations and reduces the risk of postoperative complications.
Conclusion. The main advantages of the device are the availability of flexible probes of various diameters (from 0,9 to 1,5 mm), the small total energy expended on the destruction of stones, low trauma to the surrounding tissues, and small mass and size. The device is not inferior to foreign models, and in some respects even surpasses them. According to experimental data, a promising direction is the use of the electric pulse method for the destruction of atherosclerotic plaques in cardiology. However, the peculiarities of the cardiovascular system, such as the probe diameter, the value of the input energy and the combination with tools for angioplasty, must be taken into account while designing and developing the device and the probes.
References
1. Apolihin O.I., Sivkov A.V., Gushin B.L. Prospects for the development of modern urology // Proceedings of the IX All-Russian Congress of Urologists. – Moscow, 1997. – p. 181-200.
2. Afonin V.Ya., Gudkov A.V., Boschenko V.S., Arseniev A.V. Efficacy and safety of endoscopic contact electropulsing lithotripsy in patients with urolithiasis // Siberian Medical Journal, 2009, Vol. 24, № 1. – p. 117-123.
3. Akberov R.F., Bobrowski I.A. Experience with remote lithotripsy in the treatment of patients with urolithiasis on the unit «TRIPNER XI DIREX Ltd» // Kazan Medical Journal, 2002, Vol. 83, № 2. – p. 99-101.
4. Lerner M., Chernenko V.P., Anenkova L.Yu., Dutov A.V. Use of electric impulse discharge in medicine // Proceedings of the International scientific conference devoted to the 100th anniversary of
the birth of Professor A.A. Vorobiev. – Vol. 2 / TPU, 2009. – p. 283-288.
5. Mesyats G.A. On the nature of the Vorobiev effect in the physics of pulsed breakdown in solid dielectrics // Technical Physics Letters, 2005, Vol. 31, № 24. – p. 51-59.
6. Chernenko V., Diamant V., Lerner M. et al. Method and intracorporal lithotripsy
fragmentation and apparatus for its implementation // USA patent № 7,087,061 B2.
7. Lopatin V.V., Lerner M.I., Burkin V., Chernenko V.P. Electric discharge destruction of biological concretions // Izv. Vuzov. Physics, 2007. – № 9, Supplement. – p. 181-184.

NEW TECHNOLOGIES IN MEDICINE: THERMAL IMAGING
Belik D.A., Mal'tseva N.A.
Scientific Advisor: Aristov A.A., Associate Professor, Ph.D. Linguistic Advisor: Falaleeva M.V.
Tomsk Polytechnic University, 634050, Russia, Tomsk, Lenin Avenue, 30
E-mail: [email protected]
The concept of thermal imaging in general and the characteristics of some types of thermal imagers are described in this article. The purpose is to present this type of diagnostics in medicine as a painless and harmless form of full examination of all human organs.
Nowadays, thermal imaging attracts great interest from the medical community. The search for an ideal method of diagnosis inevitably leads to thermal imaging, which combines visualization of pathology with complete harmlessness to the patient and medical personnel. It is also characterized by the speed and ease of obtaining information and by its technical and economic accessibility. Thermal diagnostics was first applied in clinical practice by the Canadian surgeon Dr. Lawson in 1956. He used a night vision device designed for military purposes for the early diagnosis of malignant mammary tumors in women. The use of the thermal imaging method showed encouraging results: the reliability of detecting breast cancer, especially at an early stage, was approximately 60-70%. The identification of risk groups at mass screening confirmed the efficiency of thermal imaging. Obviously, in the future thermal imaging will be increasingly used in medicine. With the development of thermal imaging technology it has become possible to use thermal imaging in neurosurgery, internal medicine, surgery, reflex diagnostics and reflexology. Interest in medical thermal imaging is growing in all developed countries.
Thermal imaging is a universal way of getting various kinds of information about the world around us. It is known that any body whose temperature differs from absolute zero emits thermal radiation. In addition, the vast majority of energy conversion processes (and this includes all known processes) occur with the release or absorption of
heat. Since the average temperature on Earth is not high, most of these processes take place with low specific heat release and at low temperatures. Accordingly, the maximum radiation energy of such processes falls into the infrared range of the spectrum. Infrared radiation is invisible to the human eye but can be detected by various detectors of thermal radiation and transformed in some way into a visible image.
Thermal imaging is a scientific and technical field that investigates the physical principles, methods and instruments (imagers) providing the possibility of observing slightly heated objects.
The thermal imaging device

Infrared radiation has low energy and is invisible to the human eye, so special devices – thermal imagers (thermographs) – were created to study it; they capture this radiation, measure it and turn it into an image visible to the eye. Thermal imagers are optoelectronic devices. The radiation unseen by the human eye is converted into an electric signal, which is amplified and automatically processed and then converted into a visible image of the thermal field of the object for its visual and quantitative assessment. The first thermal imaging systems were created in the late 1930s and were partially used during the Second World War to detect military and industrial facilities.
The application of thermal imaging methods

Thermal imaging is used in many spheres of human activity. For example, thermal imagers are used for military intelligence and the protection of facilities. Objects of conventional military equipment are visible at a distance of 2-3 km. Today, a thermal imaging camera with the image displayed on a computer screen has a sensitivity of a few hundredths of a degree. This means that if you open a front door, your thermal imprint remains visible on the handle for half an hour. Even in a house with the lights out, you will "shine" even from behind a curtain. In the metro you can easily distinguish the people who have just entered. The presence of a common cold in a person, and whether he has anything interesting with him, can be seen at a distance of several hundred meters.
It is also useful to use thermal imaging to locate defects in different installations. Naturally, when in any installation or site an increase or decrease of heat is observed from some process in a place where it should not be, or the heat at such places varies greatly, the problem can be corrected in a timely manner. Sometimes certain defects can be seen only with a thermal imager. For example, bridges and heavy supporting structures begin to give off more energy than they should as the metal ages or under off-design strain. It becomes possible to diagnose the state of an object without disturbing its integrity, although there may be difficulties associated with limited accuracy caused by intermediate structures. Thus, the imager can be used as an operational, and perhaps the only, monitor of the safety status of many objects and can help prevent catastrophes. Checking the operation of flues and ventilation, heat and mass transfer, and atmospheric phenomena becomes orders of magnitude easier, simpler and more informative. Thermal imaging has also found wide application in medicine.
The use of thermal imaging in medicine
In modern medicine, a thermal imaging survey is a powerful diagnostic method for detecting pathologies which are difficult to monitor in other ways. Thermal imaging is used to diagnose the early stages (before radiographic manifestations, and in some cases long before the patient's complaints) of the following diseases: inflammation and swelling of the mammary glands, of the organs of the gynecological sphere, skin and lymph nodes, ENT diseases, damage to the nerves and vessels of the limbs, varicose veins, inflammatory diseases of the gastrointestinal tract, liver and kidneys, osteochondrosis and spinal tumors. The imager, an absolutely harmless device, is effectively used in obstetrics and pediatrics. In a healthy body the temperature distribution is symmetrical about the midline of the body. Violation of this symmetry is the basic criterion of thermal imaging diagnostics. From areas of the body with abnormally high or low temperature, the symptoms of more than 150 diseases can be recognized at the earliest stages of their occurrence. Thermography is a method of

functional diagnostics based on the detection of the infrared radiation of the human body, which is proportional to its temperature. In the norm, the distribution and intensity of thermal radiation are a definite signature of the physiological processes occurring in the body. There are two main types of thermography:
a. contact cholesteric thermography;
b. telethermography.
Telethermography is based on the conversion of the infrared radiation of the human body into an electrical signal that is visualized as an image on a screen. Contact cholesteric thermography is based on the optical properties of cholesteric liquid crystals, which change color across the rainbow when applied to a heat-emitting surface: cold areas correspond to red and the hottest areas to blue.
Having considered the various methods of thermal imaging, we need to know how thermographic images are interpreted. There are visual and quantitative ways to evaluate the thermal picture. Visual (qualitative) assessment of a thermogram means determining the location, size, shape and structure of the foci of enhanced emission, as well as roughly estimating the magnitude of the infrared radiation. However, visual assessment cannot measure the temperature accurately. Moreover, the apparent temperature rise in the thermograph depends on the scan rate and the size of the field. A difficulty for the clinical evaluation of thermograms is that a temperature rise over a small area is hardly noticeable; as a result, a pathological focus of small size may not be detected. The radiometric approach is very promising. It involves the latest technology and can be used to conduct mass preventive examinations, to obtain quantitative information on pathological processes in the studied areas, and to evaluate the effectiveness of thermography.
Conclusion. In conclusion, we would like to say that thermal imaging is a very topical subject today, because there are few methods of investigation that can be carried out without interference and negative impact on the body. Thermal imaging can be called a universal way of receiving information about the world. In modern medicine, a thermal imaging survey is a powerful diagnostic method for detecting diseases that are poorly monitored by other methods. A thermal imaging survey is used to diagnose various diseases at early stages, in some cases long before the patient's complaints. We are convinced that thermal imaging surveys will be widely used in medicine and will be actively implemented in the medical institutions of our country.



THE DETECTING UNIT BASED ON SOLID-STATE GALLIUM ARSENIDE DETECTORS FOR X-RAY MEDICAL DIAGNOSTICS
Sakharina Y.V.1, Korobova А.А.1, Nam I.F.2
1 Tomsk Polytechnic University, 634050, Russia, Tomsk, Lenina av., 30
2 Siberian Physical-Technical Institute, 634050, Russia, Tomsk, Lenina av., 36
E-mail: [email protected]
Abstract. In this paper, the development results of a detecting unit based on microstrip GaAs detectors for scanning x-ray systems are presented. The detection unit has a modular design.

Index Terms – x-ray, detection unit, gallium arsenide detectors
I. Introduction
Medical X-ray imaging has seen considerable improvements in recent years with the advent of digital radiology, based on direct detection systems such as flat-panel detectors, or indirect systems such as computed radiology image plates [1]. These new systems use solid-state detectors instead of film as radiation sensors, and perform better than standard film-cassette systems in terms of image contrast and resolution, with all the advantages of digital image processing such as storage, and an additional reduction of the dose delivered to the patient [2]. In this paper, we present a simple prototype of a detecting unit for two-dimensional digital X-ray imaging using GaAs microstrip detectors with readout electronics for medical applications.
II. SYSTEM DESCRIPTION
The block diagram and external view of the detecting unit based on gallium arsenide detectors for scanning x-ray systems are presented in Figure 1 and Figure 2 correspondingly. In order to obtain a high-efficiency and high-spatial-resolution device, we use GaAs strip detectors irradiated on the side in an "edge-on" geometry.


Figure 1. Block diagram of the detecting unit.
In solid-state detectors, the charge produced by the photon interactions is collected directly. The collected charge is converted to a voltage pulse by a preamplifier.
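For a sense of the signal scale behind this charge collection, the sketch below estimates the charge from one absorbed photon. The pair-creation energy of roughly 4.2 eV for GaAs and the 20 keV photon energy are assumed, commonly quoted reference values, not figures taken from this paper.

```python
# Rough estimate of the charge pulse produced by one X-ray photon absorbed in GaAs.
# Assumptions (not from this paper): mean electron-hole pair creation energy of
# about 4.2 eV in GaAs and a 20 keV photon as a mammography-like example.
E_PHOTON_EV = 20_000.0   # absorbed photon energy, eV
E_PAIR_EV = 4.2          # assumed energy per electron-hole pair in GaAs, eV
Q_E = 1.602e-19          # elementary charge, C

n_pairs = E_PHOTON_EV / E_PAIR_EV
charge_fC = n_pairs * Q_E * 1e15   # collected charge, femtocoulombs

print(f"~{n_pairs:.0f} electron-hole pairs, ~{charge_fC:.2f} fC per photon")
```

It is this femtocoulomb-scale charge that the preamplifier turns into a measurable voltage pulse, whether the electronics then count individual photons or integrate the signal.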

Figure 2. The external view of the detecting unit: Supply unit – power supply unit, DU – detecting units, IU – interface unit.

Due to the interaction of x-rays with the gallium arsenide detectors (1), electron-hole pairs are generated. Under the action of the electric field these pairs move to the electrodes and induce a current impulse on the electrodes of the microstrip detector. This impulse is transmitted to the input of the multichannel read-out electronics (2), which operates in single-photon-counting mode as well as in integrating mode. Under the action of control pulses from the interface unit (3), the signal from each element of the detector gets to the analog-to-digital converter (ADC); after digitization the data are transmitted to the data processing device (4), for example a computer. In the data processing device any kind of representation of the information received from the detectors can be obtained, including visualization as an image: one axis of this image coincides with the scanning direction, and the second axis is formed by the line of detectors. Microstrip detectors with the integrated chip are combined into one module. The advantages of the modular structure are:
• scalability, i.e. a change in the number of modules should not demand the development of new modules;
• a change of one parameter of the detection unit (the size of the detector, the type of the primary converter, the type of interface for data input into the personal computer, etc.) should not entail a change of circuitry in more than one module.
The control board in the IU manages the functioning of the modules. The modular design also means that modules can be easily scaled to fit the application when required. The control board also converts analog signals from the microstrip detectors to digital data and transmits them to the device for data processing and visualization. The main element of this board is a special reprogrammable chip. It allows using this unit not only in mammography but also in non-destructive testing systems and for customs control: the only thing needed is to change the modules.
III. RESULTS
Spatial resolution. The spatial resolution was measured using the L659061 test pattern. Three x-ray images of this pattern are shown in Figure 3.

III. RESULTS
Spatial resolution. The spatial resolution was measured using the L659061 test pattern. Three X-ray images of this pattern are shown in Figure 3.
Figure 3. X-ray images of the L659061 test pattern: a) the pattern is parallel, b) the pattern is perpendicular, and c) the pattern is at an angle of 45° with respect to the direction of DU motion.
Preliminary estimation of the images showed a spatial resolution of 8 line pairs per mm. As a result, we can visualize objects 50 µm in size, as illustrated in Figure 4.

Figure 4. X-ray image of a set of steel (Fe) wires, d = 63 and 50 µm, respectively.
Contrast. The contrast sensitivity was measured using a special contrast-detail pattern. An aluminum disk with a thickness of 100 microns can be visualized against the background of an aluminum plate with a thickness of 1.5 mm in the X-ray image of the contrast-detail pattern (Figure 5).

Figure 5. X-ray image of the detail-contrast pattern for contrast sensitivity determination.
Dynamic range. Images of an aluminum wedge were obtained (Figure 6). According to the results, the dynamic range of the detecting unit is at least 500.

Figure 6. X-ray image of aluminum wedge.


IV. CONCLUSION
Digital detectors are currently being developed worldwide; they are the most effective means of detecting X-rays. Even slight improvements in the presentation of information and in efficiency play an important role for consumers. Therefore, the proposed technology, which reduces the radiation dose to the patient, will firstly reduce the fear of the population of X-ray examinations and increase the flow of surveyed patients; secondly, the price will


make it possible for many clinics that still use X-ray film to buy digital X-ray systems. This work is supported by: АВЦП 2.1.2/12752; РФФИ 09-02-99028-р_офи; ФЦП «Кадры», state contract No. 02.740.11.0164 of June 25, 2009.

REFERENCES [1] P. Rato Mendes, Topics on ionising radiation physics and detectors, course at the Fourth Southern European EPS School ‘‘Physics in Medicine’’, Faro, September, 2001. [2] M. Overdick, Flat X-ray detectors for medical imaging, keynote lecture at the Fourth International Workshop on Radiation Imaging Detectors, Amsterdam, September, 2002, Nucl. Instr. and Meth. A (2003), these proceedings.

ELECTROCARDIOGRAPH WITH NANOELECTRODES FOR INDIVIDUAL APPLICATION N.S. Starikova, M.A. Yuzhakova, P.G. Penkov Research supervisor: D.K. Avdeeva, DSc Tomsk Polytechnic University, 634050, Russia, Tomsk, 30, Lenin Ave. E-mail: [email protected] Heart diseases are a major public health problem that takes the lives of a great number of people in Russia each year, more than lung cancer, breast cancer, stroke, and AIDS combined. Deaths from heart diseases can be lowered by electrocardiography. ECG (Electrocardiograph) is a useful tool for detecting symptoms of heart diseases. [1]. Depolarisation initiated at the SA node spreads as a wavefront across the two atria, and also at a higher speed along the three inter-nodal tracts to the AV node. There is a delay at the AV node, then the wavefront travels at high speed down the His bundle in the interventricular septum, dividing the two ventricles. The His bundle branches into the Purkinje system which conducts the wavefronts along much of the endocardial surfaces of the two ventricles. The wavefronts then spread more slowly through the normal myocardium. They spread from the inside to the outside of the ventricles. Wavefronts travel with higher velocity in the direction of the fibre orientation. The morphology of the resulting ECG recorded on the chest surface depends on the orientation of the heart, the active recording electrode and the reference electrode. The signal waveform produced for each heart beat consists of the P wave, due to atrial depolarisation, the QRS complex due to ventricular depolarisation, and the T wave caused by ventricular repolarisation. The effects of summing all the electrical activity in the heart can be represented by an electrical dipole whose magnitude and direction is constantly changing. The scalar magnitude of the ECG is then the dot product of the dipole and the electrode orientation. Generally 12 lead positions are commonly used to record the ECG. The first three are known as


leads I, II and III (Fig. 1). They are left arm to right arm for I, i.e. the active lead is on the left arm (usually the left wrist) and the reference electrode is on the right arm (wrist). Lead II is left leg (ankle) to right arm, and lead III is left leg to left arm. The right ankle is usually grounded. The lead vectors can be represented by an equilateral triangle, known as Einthoven's triangle. The direction of the lead I vector is 0 degrees by convention. The direction of lead II is 60 degrees and that of lead III is 120 degrees.
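Since the scalar magnitude of the ECG is described above as the dot product of the cardiac dipole and the electrode (lead) orientation, a minimal sketch of that projection for the three limb leads is given below. It is in Python, the dipole value is made up purely for illustration, and it is not taken from the paper.

```python
import numpy as np

# Unit vectors of the Einthoven limb leads in the frontal plane,
# at 0, 60 and 120 degrees by convention.
LEAD_ANGLES_DEG = {"I": 0.0, "II": 60.0, "III": 120.0}

def lead_voltages(dipole_xy):
    """Project a 2D cardiac dipole (arbitrary units) onto leads I, II and III."""
    d = np.asarray(dipole_xy, dtype=float)
    out = {}
    for name, angle in LEAD_ANGLES_DEG.items():
        theta = np.deg2rad(angle)
        lead_dir = np.array([np.cos(theta), np.sin(theta)])
        out[name] = float(d @ lead_dir)   # scalar ECG = dipole . lead direction
    return out

# Hypothetical instantaneous dipole during the QRS complex.
vals = lead_voltages([1.0, 0.5])
print(vals)
# Sanity check: with this convention Einthoven's law II = I + III holds.
assert abs(vals["II"] - (vals["I"] + vals["III"])) < 1e-9
```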

Figure 1. Commonly used lead positions.
The remaining lead positions use a common reference, known as Wilson's central terminal. This consists of the right and left wrists joined to the left ankle, each through a suitably large resistor. The first three active electrodes are the right and left wrists and the left ankle. In practice this means that when one limb is an active electrode, it is shunted by the resistance that is part of the

Section V: The Use of Modern Technical and Information Means in Health Services Wilson's central terminal circuit. To avoid this shunting, the active limb is connected by a resistor of half the value of the others, to the non-inverting input of the amplifier. The limb is not connected to the Wilson's central terminal. This is known as the augmented lead system. The leads are called aVL, aVR and aVF for active lead connections to the left wrist, right wrist and left ankle respectively. The other six leads also use the Wilson's central terminal as reference and the active lead is placed in different positions over the front of the chest overlying the base of the heart and the apex. Electrocardiograph recorders require a number of features to make them usable in clinical practice. These are: 1. Protection circuitry against defibrillation shocks that may be given to the patient. These shocks may be up to 3,000 volts. 2. Lead selector. The default mode is to automatically record all 12 leads simultaneously. Otherwise one or more leads are selected for recording. In cheaper machines, three or four leads are recorded simultaneously for five seconds at a time, with automatic switching to each group of three or four leads. 3. Calibration signal of 1 mV is automatically applied to each channel for a brief period. 4. Preamplifiers have a very high input impedance and high common mode rejection ratio (reject signals appearing on both the active and reference leads simultaneously). 5. Isolation circuit separates the patient from the power supply. 6. Driven right leg circuit. In older instruments the right leg was grounded. Now, as part of the isolation, the right leg is not connected to ground, but is instead driven by an amplifier to remain at a virtual ground. 7. Driver amplifier follows the pre-amplifier and drives the chart recorder. It also filters the signal to remove any dc offset and high frequency noise. 8. Microprocessor system contains circuitry for digitizing the signal, and storing and analyzing it. Most systems can automatically calculate the rate, analyze most of the common arrhythmias, report the axes of some features, and detect old and recent myocardial infarcts (heart attacks, coronary occlusions). 9. Recorder printer is used to provide a hard copy of the ECG, together with the patient information and the analysis and diagnosis. Three lead positions are going to be used in an ECG with nanoelectrodes (Fig.2). An ECG with nanoelectrodes is to be an advance among traditional electrocardiograph units. Its compact size and lightness will provide portability, making it possible to measure and record cardiac dysrhythmia anywhere and any time [2]. These small units will allow patients to identify cardiac irregularities at the early stages and will be very useful for examining patients at home. A compact electrocardiograph must always be ready to be

transported to wherever needed and has to be able to perform reliable data recording. Thus, small size, minimal weight, and extended battery life are essential. Modern nanotechnologies and nanomaterials open new perspectives for a new generation of medical electrodes – nanoelectrodes, which have higher stability of electrode potentials, stable contact and polarized potentials, lower quantity of noise and resistance. Superior metrological characteristics of nanoelectrodes allow us to create new medical electrocardiographic equipment for domestic application that operates in a wide frequency range and make it possible to monitor bioelectric activity of human organs in a nanovolt and microvolt range. Therefore, the early heart disease detection will be possible. Up-to-date electrocardiography is to provide: - accessibility - health monitoring - patient’s health monitoring during his whole life. Electrocardiograph with nanoelectrodes for domestic application is going to have the following parameters: 1. High resolution. The quality of electrodiagnostic equipment depends on the quality of electrodes used for picking-up the bioelectrical activity of human organs and tissues.

Figure 2. Nanoelectrodes for an ECG with limb leads.
2. Diagnostic significance, owing to the reduced electrocardiograph noise and the high-quality pickup of the bioelectrical activity of human organs and tissues by nanoelectrodes. These advantages lead to: high competitiveness of home-produced ECGs; high quality of electrocardiographic monitoring due to the diagnostics of cardiovascular pathologies at the early stages; lethality rate reduction; lifetime extension. 3. Hardware components and ECG equipment cost will be reduced due to the simplification of the ECG circuit. Nowadays, heart disease and poor diagnostics cause mortality at the


age of 40–50, generally in males. Episodic heart monitoring cannot detect the primary factors of preinfarction angina. To solve this problem, it is necessary to create a computerized multifunctional electrocardiograph for individual application that is convenient to operate. The device should be affordable for ordinary people to monitor the heart state without having to leave their home. The ECG with nanoelectrodes is meant for constant heart monitoring with these data saved in an individual database. It will also have software for self-diagnostics.

ECG with nanoelectrodes is to become an indispensable device for millions of people. These ECGs with nanoelectrodes will be very convenient and allow patients to monitor their current heart state without having to leave their home thus, reducing the risks of heart disease. References: 1. RASFD, retrieved February 21, 2011 from http://www.rasfd.com/ 2. Medical Practice (Family Medicine): Future Approaches , retrieved February 21, 2011 from http://giduv.com/journal/2004/2/obschaja_vrachebn aja_praktika

APPLICATION OF BIOFEEDBACK FOR TRAINING FOR MONITORING THE STATUS OF PREGNANT MOTHER-FETUS Timkina K. V.1, Khlopova A.A.2, Kiselyova E.Yu.1, Tolmachev I.V.2 The scientific adviser: Tolmachev I.V., Kiselyova E.Yu. Tomsk Polytechnic University, 30, Lenin Ave., Tomsk, 634050, Russia Siberian State Medical University, 2, Moskowskii Tr., Tomsk, 634050, Russia E-mail: [email protected] The problem of non-invasive methods of fetal monitoring, with the possibility of correction of the state, is still relevant, in connection with the progression of pathology of pregnancy. Prolonged stress regulatory systems of the mother leads to depletion of adaptive reserves, disruption of physiological rhythms and mechanisms of regulation that can not affect the functional status of the fetus. In this regard, an urgent problem of modern medicine is the development of techniques of correction of the pregnant woman based on accessing natural resources of the human body. One such method is to control with biofeedback training. Biofeedback - a method of medical rehabilitation, in which a person using electronic devices instantly and continuously provided with information about the physiological performance of his internal organs by light, sound, visual and tactile feedback signals. Based on this information, people can learn to arbitrarily change these under normal conditions of intangible parameters. The purpose of this study is: Evaluating the effectiveness of biofeedback training for pregnant women when monitoring the status of the mother-fetus Tasks: 1. Development and implementation of the algorithm biofeedback training for pregnant women as a software application. 2. Research on groups of pregnant women with follow-up evaluating the effectiveness of biofeedback-training.


A software application for biofeedback training has been developed; during a session it guides the patient in controlling heart rate through respiratory arrhythmia, which is a good indicator of the quality of the patient's breathing (Fig. 1).

Fig. 1. The main form of the biofeedback training application.
If the patient achieves maximum heart rate fluctuations during breathing, this is called diaphragmatic, relaxation breathing; it increases the oxygenation of the blood and helps prevent fetal hypoxia. In addition, the developed software application allows the functional status of the fetus to be evaluated at each stage of the biofeedback training.
Biofeedback training technique. The group for assessing the effectiveness of the training includes pregnant women with a gestational age of 32-35 weeks, with a physiological, not pathological, state of mother and fetus. For more effective

Section V: The Use of Modern Technical and Information Means in Health Services learning management skills HR needs at least 2 training accomplishments with a break of 2 days. The room for the biofeedback training should be at room temperature, insulation, quiet atmosphere without the annoying factors (strangers, conversations). The woman must be in a comfortable chair in front of the monitor. The abdominal wall are superimposed five abdominal electrodes, greased conductive gel. The instructor gives general advice on the implementation of the session. Next, the patient begins to do the job. The structure of the script. The total duration of the session - 18 minutes. The beginning of the session. Check the contact with the device. Duration of the stage - at the discretion of the instructor. Step 1. Record before the session. Screensaver with text and voice instructions for general relaxation, followed by video images of relaxing the content. Images are selected on the subject "Nature", in the color scheme, aimed at calming the nervous system: yellow - green and blue - blue. This step allows us to write the initial state of the patient and fetus. The duration of treatment - 3,5 min. Step 2. Instructions 1. The patient in the form of speech offered the job by breathing to try to control an animated picture "flower". Duration of this stage - 30 seconds. Step 3. Session 1. At this stage the problem of the patient to verify the possibility of changes in heart rate and select the most effective way to impact. On the screen is a flower, which, depending on the patient's heart rate ranged from (average + 2 * standard deviation) before (Average - 2 * standard deviation) is in the dissolved state or closed. Each change of heart rate is accompanied by the sound a note of the piano. The same patient can monitor the heart rate, which is displayed on the screen. Duration of this stage - 2 minutes. Step 4. Rest 1. After the session the patient with an audio message informing about the rest, displays a video sequence of relaxing the content of "Nature". Duration of this stage - 40 sec. Step 5. Instruction 2. The patient presented in the form of a sound job of managing the animated picture "animal", it is necessary for the animal to run faster. Because session is aimed at increasing heart rate, the patient is warned about the possibility of dizziness. Step 6. Session 2. The patient tries to increase heart rate relative to the background value and hold at this level. The patient presented on the screen of an animal that can run faster with an increase in heart rate in the range of (Average + 2 * standard deviation) before (Average - 2 * standard deviation). Another stimulus - the sound, it is necessary to achieve maximum volume activating nature of the music. Duration of this stage - 1 min.

Step 7. Rest 2. After the session the patient with an audio message informing about the rest, displays a video sequence of relaxing the content of "Animals". Duration of this stage - 40 sec. Step 8. Instruction 3. The patient presented in the form of a sound job of managing the animated picture "person", it is necessary that the person smile, if done correctly set the music becomes louder. Step 9. Session 3. The patient tries to lower the heart rate relative to the background value and hold at this level. For the patient displayed a person who feel sad when an increase in heart rate and smiles with a decrease in heart rate, heart rate range of (Average + 2 * standard deviation) before (Average - 2 * standard deviation). Necessary make maximum volume of the relaxing music. Duration of this stage - 2 minutes. Step 10. Rest 3. After the session the patient with an audio message informing about the rest, displays a video sequence of relaxing the content of "Kids". Duration of this stage - 40 sec. Step 11. Instruction 4. The patient presented in the form of a sound job of managing the animated picture "The steam locomotive, steam locomotive to be moved from station to station. Step 12. Session 4. The patient tries to control heart rate relative to the background values in both parties, to raise and lower the heart rate for maximum respiratory cardiac arrhythmia. The patient presented on-screen animated picture "The locomotive, which moves from station to station, depending on heart rate, heart rate range of (Average + 2 * standard deviation) before (Average - 2 * standard deviation). Duration of the stage - 2 minutes. Step 13. Recording after the session. Patient with audible message informing that you can relax after the session. The screen displays the content of video sequence of relaxing, "Water-Plants. " Duration of the stage - 3,5 min. Step 14. Finish. Screensaver with text about the end of the session and applause as a reward for a job. Duration of the stage - 15 sec. After each stage of training is evaluated several characteristics of the state of the mother and fetus on the screen and stored in the database: Mo, DX, AMO, IN, HR. These characteristics reflect the state of the autonomic nervous system of mother and fetus. Each of the calculated parameters in the analysis cardiointervalograms attached to a specific physiological meaning [Baevsky R. M, 1995]. Based on the data to monitor the dynamics of indicators of maternal and fetal between each stage of the training, as well as between sessions. The effectiveness of biofeedback-training can be estimated on the basis of the parameters characterizing the stress level of the fetus (Mo, DX, AMO, IN, fetal heart rate) and functional status of the mother (Mo, DX, AMO, IN, HR mother).
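The session logic described above repeatedly maps the instantaneous heart rate onto a band between (mean − 2·SD) and (mean + 2·SD) of the background recording to drive the animation (flower, animal, face, locomotive). A minimal sketch of that mapping is shown below; it is in Python, and the function names and the 0–1 animation scale are illustrative assumptions rather than part of the original application.

```python
import statistics

def hr_band(background_hr):
    """Return the (lower, upper) feedback band: mean +/- 2 * standard deviation."""
    mean = statistics.fmean(background_hr)
    sd = statistics.pstdev(background_hr)
    return mean - 2.0 * sd, mean + 2.0 * sd

def animation_state(current_hr, band):
    """Map the current heart rate into [0, 1] within the band.

    0.0 -> flower fully closed / animal at rest,
    1.0 -> flower fully open / animal running at full speed.
    """
    lo, hi = band
    if hi <= lo:                    # degenerate band, e.g. a constant background HR
        return 0.5
    x = (current_hr - lo) / (hi - lo)
    return min(1.0, max(0.0, x))    # clamp values outside the band

# Illustrative background recording (beats per minute) and one feedback step.
background = [78, 80, 82, 79, 81, 83, 80, 77]
band = hr_band(background)
print(band, animation_state(84, band))
```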


Section VI: Material Science

Section VI

MATERIAL SCIENCE


ANALYSIS OF THE CORROSION RESISTANCE OF STEEL GROUPS 316 AND 317
Bekterbekov N.B.
Scientific adviser: Trushenko E.A., candidate of technical science
Tomsk Polytechnic University, 634050, Tomsk, Russia
E-mail: [email protected]
316, 316L & 317L (UNS S31600 / UNS S31603 / UNS S31703), Chromium-Nickel-Molybdenum
1 General Properties
Russian equivalents of the standard: AISI 316 – 10X17H13M2, AISI 316L – 03X17H14M3, AISI 316Ti – 10X17H13M2T. Type 316 austenitic stainless steel is a commonly used alloy for products that require excellent overall corrosion resistance. Alloys 316 (UNS S31600), 316L (S31603), and 317L (S31703) are molybdenum-bearing austenitic stainless steels which are more resistant to general corrosion and pitting/crevice corrosion than conventional chromium-nickel austenitic stainless steels such as Alloy 304. These alloys also offer higher creep, stress-to-rupture, and tensile strength at elevated temperatures. Alloy 317L, containing 3 to 4% molybdenum, is preferred to Alloys 316 and 316L, which contain 2 to 3% molybdenum, in applications requiring enhanced pitting and general corrosion resistance. In addition to excellent corrosion resistance and strength properties, the 316, 316L and 317L Cr-Ni-Mo alloys provide excellent fabricability and formability, which are typical of the austenitic stainless steels.
Table 1. Chemical composition according to the ASTM A240 and ASME SA-240 specifications, percentage by weight (maximum unless a range is specified)
Element    | Alloy 316   | Alloy 316L  | Alloy 317L
Carbon     | 0.08        | 0.030       | 0.030
Manganese  | 2.00        | 2.00        | 2.00
Silicon    | 0.75        | 0.75        | 0.75
Chromium   | 16.00/18.00 | 16.00/18.00 | 18.00/20.00
Nickel     | 10.00/14.00 | 10.00/14.00 | 11.00/15.00
Molybdenum | 2.00/3.00   | 2.00/3.00   | 3.00/4.00
Phosphorus | 0.045       | 0.045       | 0.045
Sulfur     | 0.030       | 0.030       | 0.030
Nitrogen   | 0.10        | 0.10        | 0.10
Iron       | Bal.        | Bal.        | Bal.
2 Resistance to Corrosion
General Corrosion. Alloys 316, 316L, and 317L are more resistant to atmospheric and other mild types of corrosion than the 18-8 stainless steels. In general, media that do not corrode 18-8 stainless steels will not attack


these molybdenum-containing grades. One exception is highly oxidizing acids such as nitric acid, to which molybdenum-bearing stainless steels are less resistant. Alloys 316 and 317L are considerably more resistant to sulfuric acid solutions than any other chromium-nickel types. At temperatures as high as 120°F (38°C), both types have excellent resistance to higher concentrations. Service tests are usually desirable, as operating conditions and acid contaminants may significantly affect the corrosion rate. If there is condensation of sulfur-bearing gases, these alloys are much more resistant than other types of stainless steels. In such applications, however, the acid concentration has a marked influence on the rate of attack and should be carefully determined. Molybdenum-bearing Alloy 316 and 317L stainless steels also provide resistance to a wide variety of other environments. As shown by the laboratory corrosion data below, these alloys offer excellent resistance to boiling 20% phosphoric acid. They are also widely used in handling hot organic and fatty acids. This is a factor in the manufacture and handling of certain food and pharmaceutical products, where molybdenum-containing stainless steels are often required in order to minimize metallic contamination. In general, the Alloy 316 and 316L grades can be considered to perform equally well for a given environment. The same is true for Alloy 317L. A notable exception is in environments sufficiently corrosive to cause intergranular corrosion of welds and heat-affected zones on susceptible alloys. In such media, the Alloy 316L and 317L grades are preferable in the welded condition, since low carbon levels enhance resistance to intergranular corrosion.
Table 2. Corrosion resistance in boiling solutions: corrosion rate in mils per year (mm/y) for the cited alloys
Boiling Test Solution | Alloy 316L, Base Metal | Alloy 316L, Welded | Alloy 317L, Base Metal | Alloy 317L, Welded
20% Acetic Acid       | 0.12 (0.003)   | 0.12 (0.003)   | 0.48 (0.012)  | 0.36 (0.009)
45% Formic Acid       | 23.4 (0.594)   | 20.9 (0.531)   | 18.3 (0.465)  | 24.2 (0.615)
1% Hydrochloric Acid  | 0.96 (0.024)   | 63.6 (1.615)   | 54.2 (1.377)  | 51.4 (1.306)
10% Oxalic Acid       | 48.2 (1.224)   | 44.5 (1.130)   | 44.9 (1.140)  | 43.1 (1.094)
20% Phosphoric Acid   | 0.60 (0.015)   | 1.08 (0.027)   | 0.72 (0.018)  | 0.60 (0.015)
10% Sulfamic Acid     | 124.2 (3.155)  | 119.3 (3.030)  | 94.2 (2.393)  | 97.9 (2.487)
10% Sulfuric Acid     | 635.3 (16.137) | 658.2 (16.718) | 298.1 (7.571) | 356.4 (9.053)
10% Sodium Bisulfate  | 71.5 (1.816)   | 56.2 (1.427)   | 55.9 (1.420)  | 66.4 (1.687)
50% Sodium Hydroxide  | 77.6 (1.971)   | 85.4 (2.169)   | 32.8 (0.833)  | 31.9 (0.810)
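The parenthesized values in Table 2 are simply the mils-per-year figures converted to mm/y (1 mil = 0.0254 mm); a short check of that conversion is shown below (Python, added for illustration only).

```python
def mpy_to_mm_per_year(rate_mpy):
    """Convert a corrosion rate from mils per year (mpy) to mm per year."""
    return rate_mpy * 0.0254   # 1 mil = 0.0254 mm

# Spot-check against Table 2: 0.12 mpy -> 0.003 mm/y, 635.3 mpy -> 16.137 mm/y.
for rate in (0.12, 23.4, 635.3):
    print(rate, round(mpy_to_mm_per_year(rate), 3))
```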

Pitting/Crevice Corrosion. The resistance of austenitic stainless steels to pitting and/or crevice corrosion in the presence of chloride or other halide ions is enhanced by higher chromium (Cr), molybdenum (Mo), and nitrogen (N) contents. A relative measure of pitting resistance is given by the PREN (Pitting Resistance Equivalent, including Nitrogen) calculation, where PREN = Cr + 3.3Mo + 16N. The PREN of Alloys 316 and 316L (24.2) is better than that of Alloy 304 (PREN = 19.0), reflecting the better pitting resistance which 316 (or 316L) offers due to its Mo content. Alloy 317L, with 3.1% Mo and PREN = 29.7, offers even better resistance to pitting than the 316 alloys. Alloy 304 stainless steel is considered to resist pitting and crevice corrosion in waters containing up to about 100 ppm chloride. Mo-bearing Alloy 316 and Alloy 317L, on the other hand, will handle waters with chloride contents up to about 2000 and 5000 ppm, respectively. Although these alloys have been used with mixed success in seawater (19,000 ppm chloride), they are not recommended for such use. Alloy 2507, with 4% Mo, 25% Cr, and 7% Ni, is designed for use in salt water. Alloys 316 and 317L are considered adequate for some marine environment applications such as boat rails, hardware, and facades of buildings near the ocean which are exposed to salt spray. Alloys 316 and 317L stainless steels all perform without evidence of corrosion in the 100-hour 5% salt spray (ASTM B117) test.
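Since the PREN formula above is a simple linear combination of the alloy composition, a short sketch computing it may be useful. It is in Python; the nominal compositions chosen in the example are illustrative assumptions within the Table 1 ranges, not measured heat analyses, so the printed values only approximately reproduce the PREN figures quoted in the text.

```python
def pren(cr, mo, n=0.0):
    """Pitting Resistance Equivalent (including Nitrogen): PREN = Cr + 3.3*Mo + 16*N.

    cr, mo, n -- weight percentages of chromium, molybdenum and nitrogen.
    """
    return cr + 3.3 * mo + 16.0 * n

# Illustrative nominal compositions (wt.%); the actual PREN depends on the heat analysis.
examples = {
    "Alloy 304":  pren(cr=18.0, mo=0.0, n=0.06),
    "Alloy 316L": pren(cr=16.5, mo=2.1, n=0.05),
    "Alloy 317L": pren(cr=18.5, mo=3.1, n=0.06),
}
for name, value in examples.items():
    print(f"{name}: PREN = {value:.1f}")
```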

Intergranular Corrosion. Both Alloys 316 and 317L are susceptible to precipitation of chromium carbides at grain boundaries when exposed to temperatures in the 800 to 1500°F (427 to 816°C) range. Such "sensitized" steels are subject to intergranular corrosion when exposed to aggressive environments. Where only short periods of exposure are encountered, however, such as in welding, Alloy 317L, with its higher chromium and molybdenum content, is more resistant to intergranular attack than Alloy 316 for applications where light-gauge material is to be welded. Heavier cross sections over 7/16 inch (11.1 mm) usually require annealing even when Alloy 317L is used. For applications where heavy cross sections cannot be annealed after welding, or where low-temperature stress-relieving treatments are desired, the low-carbon Alloys 316L and 317L are available to avoid the hazard of intergranular corrosion. This provides resistance to intergranular attack in any thickness in the as-welded condition or with short periods of exposure in the 800 to 1500°F (427 to 816°C) temperature range. If vessels require stress-relieving treatment, short treatments falling within these limits can be employed without affecting the normally excellent corrosion resistance of the metal. Accelerated cooling from higher temperatures is not needed for the "L" grades when very heavy or bulky sections are annealed. Alloys 316L and 317L possess the same desirable corrosion resistance and mechanical properties as the corresponding higher-carbon alloys and offer an additional advantage in highly corrosive applications where intergranular corrosion is a hazard. Although the short-duration heating encountered during welding or stress relieving does not produce susceptibility to intergranular corrosion, it should be noted that continuous or prolonged exposure in the 800 to 1500°F (427 to 816°C) temperature range can be harmful. Stress relieving in the range between 1100 and 1500°F (593 to 816°C) may also cause some slight embrittlement of these types.
Table 3. Intergranular corrosion tests, ASTM A262; corrosion rate, mils/yr (mm/a)
Evaluation Test        | Alloy 316              | Alloy 316L     | Alloy 317
Practice B, Base Metal | 36 (0.9)               | 26 (0.7)       | 21 (0.5)
Practice B, Welded     | 41 (1.0)               | 23 (0.6)       | 24 (0.6)
Practice E, Base Metal | No Fissures on Bend    | No Fissures    | No Fissures
Practice E, Welded     | Some Fissures on Weld  | No Fissures    | No Fissures
Practice A, Base Metal | Step Structure         | Step Structure | Step Structure
Practice A, Welded     | Ditched (unacceptable) | Step Structure | Step Structure

References: 1. Steel Glossary; American Iron and steel Institute (AISI) Retrieved, October 21, 2008. 2. Denny A. Jones, Principles and Prevention of Corrosion, 2nd edition, 1996, Prentice Hall, Upper Saddle River, NJ. ISBN 0-13-359993-0


MULTISCALE TECHNIQUE FOR LOCALIZED STRAIN INVESTIGATION UNDER TENSION OF CARBON FIBER REINFORCED COMPOSITE SPECIMENS WITH EDGE CRACKS BASED ON DATA OF STRAIN GAUGING, SURFACE STRAIN MAPPING AND ACOUSTIC EMISSION
Burkov M.V., Byakov A.V., Lyubutin P.S.
Scientific adviser: Panin S.V., PhD, professor
Institute of Strength Physics and Materials Science SB RAS, 634021, Russia, Tomsk, Akademicheskiy ave, 2/4
E-mail: [email protected]
Introduction
Different destructive and non-destructive techniques are applied to investigate the processes of deformation and fracture. A special place among the non-destructive ones is occupied by methods that detect changes directly during loading. Combined application of such methods, which depending on their operating principle are sensitive at different scale levels, can provide a complete picture of the process. Thus, the combination of strain measurement techniques and acoustic emission (AE) was used at the Chaplygin SibNIA [ ]. A similar approach, based on the use of the television-optical measuring system TOMSC, was proposed by Academician V.E. Panin [ ]; it also links the characteristic stages of the «σ – ε» curve with the peculiarities of deformation at the meso- and macroscale levels during the loading of heterogeneous materials. Combining AE, the television-optical measuring system (the digital image correlation (DIC) method) and strain gauging makes it possible to simultaneously detect the localization of deformation and fracture at different scales. The principal problem in this case is: under what conditions is localization accompanied by increased values of the informative parameters reflecting the development of deformation at the micro, meso and macro scales. Such parameters for the AE method can be the counting rate dN/dt or the activity dNAE/dt; for surface strain mapping, the intensity of shear strain γ; for strain gauging, dσ/dt, the time or strain derivative of the externally applied stress. A convenient and intuitive way to identify the activation of deformation processes is the identification of characteristic stages of deformation and fracture associated with the relevant mechanisms, carriers and deformation structures [ ]. In our previous studies aluminum specimens with different types of stress concentrators were tested [ , ].
Materials and research technique
A combined method for the investigation of localized deformation processes in notched carbon fiber reinforced composites is applied in order to reveal the characteristic stages of strain and fracture. The stress concentrators have the shape of an edge crack with ~1 mm width and 14.5, 18 and 21.5 mm depth.


Use of simultaneous registration has allowed us to register and compare parameters under analysis during entire time of the experiments. Specimen scheme is presented on Figure 1 (with thickness of 4 mm.). Material is pseudoisotropic composite made of unidirectional carbon fiber layers [45=, -45=, 0=, 90=] sintered in carbon matrix. Dimensions of specimens were taken according to ASTM E1922 (Standard Test Method for Translaminar Fracture Toughness of Laminated Polymer Matrix Composite Materials).
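Since the basic informative parameter of the AE data is the derivative of the accumulated number of events over loading time (the AE activity dNAE/dt), a minimal sketch of how such a curve can be computed and averaged from registered event time stamps is given below. It is in Python; the function names, the binning approach and the simulated data are illustrative assumptions, not the actual processing chain of the hard-software measuring system.

```python
import numpy as np

def ae_activity(event_times, bin_width=1.0):
    """Acoustic-emission activity dN/dt from a list of event time stamps.

    event_times : 1D array of AE event arrival times, s
    bin_width   : width of the time bins used to estimate the derivative, s
    Returns (bin_centres, events_per_second).
    """
    t = np.sort(np.asarray(event_times, dtype=float))
    edges = np.arange(t[0], t[-1] + bin_width, bin_width)
    counts, _ = np.histogram(t, bins=edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, counts / bin_width

def smooth(y, window=9):
    """Simple moving-average smoothing, standing in for the averaged AE curves."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="same")

# Hypothetical usage with simulated event times over roughly a 600 s test.
rng = np.random.default_rng(1)
times = np.cumsum(rng.exponential(scale=5.0, size=120))
centres, activity = ae_activity(times, bin_width=10.0)
print(centres[:3], smooth(activity)[:3])
```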

Figure 1. Specimen scheme, dash line indicates region assigned for calculation of shear strain intensity. Specimens were stretched under static uniaxial tension at Instron 5582 electro-mechanical testing machine with loading rate of 0,3 mm/min. Surface imaging was carried out by Canon EOS 550D digital photo camera. The camera has been equipped by telephoto lens Canon EF-S 100400mm f/4-5.6 IS. Registration of acoustic emission (АE) signals was performed by PC-based hard-software measuring technique [ ]. For analysis of acoustic emission data, the derivative on AE events accumulation over loading time was calculated as the basic informative parameter of АE data (acoustic emission activity dN=/dt). A certain region of the image acquainted was determined for calculation of the average value of shear strain intensity. The area of the image with the size of 3300×3900 pixels (the physical sizes ~35×41,5 mm) was taken (Figure 1). The size of the regions for strain estimation was chosen in order to ensure observation of formation and development of macro scale shear–bands.

Section VI: Material Science Also, patterns of localized strains were modeled with use of ANSYS design package Results Figure 2 shows «= - =» graph and its time derivative d=/dt. With increasing of notch dimension elongation at fracture is decreased while ultimate strength has an opposite trend. Using procedure of linear approximation, three stages can be marked out. Variation of notch depth brings changes to shape of the d=/dt curve. At a depth of 14,5 mm curve can be divided on 3 characteristic stages. At a depth of 18 mm 1-st and 2-nd stages become less notable, and with notch depth of 21,5 mm averaged d=/dt curve is almost a straight line. Time, sec 0

100

200

300

400

500

600

35

I

II

30

0.25

1

25

0.20

20 0.15 15

2

dσ/dt

σ, MPa

0.30

III

the third stage, according to d=/dt and =dif, activity of acoustic emission is approximately on the constant level, and then AE activity starts to increase, up to fracture. Character of the averaged AE activity curve remained constant, while changing notch depth, but the value of AE activity decreased. Conclusion Combination of strain gauging, DIC and AE data to be used allows examine stage patterns for deformation processes development at various scale levels. Application of combined method to investigation of localized deformation processes is an actual problem, because composite materials, especially made of high-strength carbon fibers, have constantly raising part in mechanical engineering, mainly in such durability critical sectors like aircraft building. That non-destructive testing technique of structural materials can be applied for creation of onboard inspection devices for highly loaded aircraft components.

0.10 10 0.05

5 0 0.0

0.00 0.2

0.4

0.6

0.8

ε, %

1.0

1.2

1.4

Figure 2. Loading diagram (1) and time derivative of stress dσ/dt (2), notch depth 14.5 mm.
Analysis of the strain distribution at the mesoscale level was performed by image processing using integral and differential methods. Figure 3 (curve 3) shows the shear strain intensity γdif obtained by differential image analysis for the specimen with a notch depth of 14.5 mm. The shear strain intensity curves of specimens with different notch dimensions differ insignificantly from each other.


Figure 3. Graphs of the time derivative of stress (1), acoustic emission activity dNAE/dt (2) and shear strain intensity γdif calculated by the differential technique (3), notch depth 14.5 mm.
In accordance with the data analysis and the correlation of the strain gauging and surface strain mapping methods, the acoustic emission registration data have been processed and interpreted in terms of the AE activity dNAE/dt (Figure 3, curve 2). The obtained data have been averaged by a smooth curve. Before

References: 1. Ser'eznov A.N., Stepanova L.N., Tikhonravov A.B. et al. Application of the acoustic-emission and strain-gaging methods to testing of the residual strength of airplanes. // Russian Journal of Nondestructive Testing – 2008 – №2 – p.28-35 2. E.E. Deryugin, V.E. Panin, S.V. Panin, V.I. Syryamkin, Method of nondestructive testing of mechanical condition of objects and device for its implementation. Patent Russian Federation №2126523. Invention bulletin, №5, 20.02.99 3. Klyushnichenko A.B., Panin S.V. Starcev O.V., Investigation of deformation and fracture at the meso- and macroscale levels of reinforced plastics under static and cyclic tension.// Phys. Mesomech. – 2002 – V. 5 №5 – p. 101-116. 4. S.V. Panin, A.V. Byakov, V.V. Grenke et al, Multiscale investigation of localized plastic deformation in tension notched D16AT specimens by acoustic emission and opticaltelevision methods. // Phys. Mesomech. – 2009 – V. 12 - №6 – p. 63-72. 5. Panin S.V., Bashkov O.V., Semashko N.A. et al, Combined research of deformation features of flat specimens and specimens with notches at the micro- and mesolevel by means of acoustic emission and surface deformation mapping.// Phys. Mesomech. – 2004. – V. 7. – № 2. – p. 303-306. 6. S.V. Panin, A.V. Byakov, M.S. Kuzovlev, et al oth., Testing of automatic system for registration, processing and analysis of acoustic emission data by model signals.// Proceedings IFOST’2009, 21-23 October, 2009, Ho Chi Ming City, Vietnam, Vol.3, p. 202-206.


COMPUTER SIMULATION OF MODE OF DEFORMATION IN MULTILAYER SYSTEMS. FINITE-ELEMENT METHOD A.A. Fernandez, V.E. Panin, G.S. Bikineev Scientific advisor: Dr. D.D. Moiseenko, Ph.D Tomsk polytechnic university, 634050, Russia, Tomsk, Lenin av., 30 Institute of strength physics and materials science SB RAS, 634021, Russia, Tomsk, Academichesky av., 2/4 [email protected] 1. Introduction «Impact» is a program designed to be a free and simple alternative to the advanced commercial Finite Element codes available today. The guideline during the development of the program has been to keep things clear and simple in design. «Impact» has been designed to be easily extendible and modular to enable programmers a way to easy add features to the program without having to enter other parts of the code. «Impact» has been written in Java. This choice of language may seem strange at first, but with the recent development of Java engines, speed penalty is not that significant. On the other hand, the Object Oriented features and the high portability of Java is a clear advantage for the future. «Impact» is a Finite Element Code which is based on an Explicit Time stepping algorithm. These kinds of codes are used to simulate dynamic phenomena such as car crashes and similar, usually involving large deformations. There are quite few explicit codes around which might seem strange since the other cousins (implicit finite elements) are quite common. The implicit codes are used to simulate static loads in structures. Something that explicit codes do does not manage very well. «Impact» is written in Java for two reasons: 1. Java is an Object Oriented language and that suits Finite Element Programming perfectly; 2. Java is clean, simple and extremely portable. At the moment, «Impact» can only handle dynamic incompressible problems. Examples of problems with this kind of limitation are basically most real world dynamic problems. The following is a list of problems that «Impact» will be able to solve in the future: 1. Collisions of any type; 2. Forming operations; 3. Dynamic events such as chassis movement etc. 2. Theoretical Base The explicit code is based on the simple formula of F=M*A where F represents the force, M is the mass of a body and A is the resulting acceleration of that body. All the code does is to calculate the acceleration of a body using small time step to translate this acceleration into a little displacement


of the body. This displacement is then used to calculate a responding force since the body is elastic and can be stretched (thus creating a reaction force). This force is then used to calculate the acceleration and then the process is repeated again from the beginning. As long as the time step is sufficiently small, the results are accurate. 3. Modelling principles The starting point for the user is «Pre Processor». It is used for: 1. Creating geometry through the use of points, curves, surfaces and volumes; 2. Creation of finite element models by meshing of curves, surfaces and volumes; 3. Setting of loads and boundary conditions; 4. Setting of solver parameter values such as time step etc.; 5. Exporting of files «.in», which are input files for the solver. «Pre Processor» operates on a full 3D view, which can be zoomed and rotated using the third mouse button either alone or in combination with CTRL and/or SHIFT key. «Pre Processor» works with two types of graphical objects: Geometry and Mesh. The geometry is CAD geometry but with build in mesh attributes. A curve for example can have a mesh attached to it. It can also have a material and a thickness, which is automatically transferred to the mesh. On the top left side is a selection menu of the «Graphics mode». Several options are available such as «Surface», which displays a shaded model. «Wireframe» is faster since no shading occurs. «Solid» is used for completely shaded view. To generate a model the user should start with points and then create curves based on these points. Finally, surfaces should be created based on the curves. If a point is later moved, the curve based on this point will be changed. By double clicking on «Geometry» the attributes of that geometry will appear on the edit field in the lower left corner. The user can change any attribute and press «Update» to modify the model. The mesh of a surface is automatically based on the mesh of the curves, which create the surface. If the mesh is modified on a curve, the mesh on the surface is also changed.

Section VI: Material Science The «Processor» realizes the calculation. It consists of a prompt window where the solver printout is shown, an editor where the input file can be modified and a model viewer where the model described by the «.in» file can be seen and rotated. The starting point is the «.in» file, which has been saved from the «Pre Processor» (or one written by hand). This file must be loaded into the «Processor» by the «Open model» button. The solution process is then started by the «Start/Stop» button. The results will be automatically written to the «.flavia.res» file, which can be loaded into the «Post Processor». «Post Processor» is used to view the results from the solver. These results are saved in a file ending with «.flavia.res» and consist of multiple time steps, which can be selected on the left hand side of the viewer. Here you can also decide what should be viewed. 4. Numerical experiment On the base of the proposed algorithm the numerical experiment was realized. In the framework of the experiment uniaxial loading of the composition «aluminium substrate – intermediate layer – ceramic coating» was simulated. The specimen had sizes 40 mm Х 20 mm Х 10 mm, the thickness of coating was equal to 2 mm, the interlayer thickness equaled 2 mm, the substrate thickness – to 6 mm. The intermediate layer represented part of specimen between substrate and coating where for each elementary volume simulated with the help of finite element the values of the modulus of elasticity, the density, the Poisson’s ratio, the yield stress and the modulus of plasticity were assigned. The values of each of these parameters of finite element were uniformly distributed in the interval between the values of corresponding parameter for coating and substrate. Simulated specimen was effected by tension along axis X during 1 second (see pic. 1). The stress at each facet was equal to 4,85 Pa.

Pic. 1. The scheme of specimen loading.
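To make the explicit scheme described in the Theoretical Base section more concrete, here is a minimal one-degree-of-freedom sketch in Python (it is not taken from «Impact», whose solver is written in Java, and the mass, stiffness and load values are made up): the force is turned into an acceleration through F = M·A, and a small time step turns that acceleration into a displacement, from which the elastic reaction force for the next step is computed.

```python
# Minimal 1-DOF illustration of an explicit time-stepping scheme:
# the elastic body is reduced to a single mass on a spring.
m = 1.0       # mass, kg
k = 1.0e4     # spring stiffness, N/m (stands in for the elastic body)
f_ext = 50.0  # constant external force, N
dt = 1.0e-4   # time step, s -- must stay well below 2/sqrt(k/m) for stability

u, v = 0.0, 0.0                        # displacement and velocity
for step in range(int(0.1 / dt)):      # simulate 0.1 s
    f_int = -k * u                     # reaction force of the stretched body
    a = (f_ext + f_int) / m            # F = M*A  ->  A = F/M
    v += a * dt                        # update velocity ...
    u += v * dt                        # ... then displacement (semi-implicit Euler)

print(f"displacement after 0.1 s: {u:.6f} m (static solution {f_ext / k:.6f} m)")
```

With sufficiently small time steps the computed displacement oscillates about the static solution, which is the behaviour the Theoretical Base section describes.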

Pic. 2 illustrates the patterns of distribution of the values of the strain intensity εI and the stress intensity σI at the interface between the ceramic coating and the intermediate layer. The results of the numerical experiments, realized on the basis of the classical mechanics of dynamic systems and the finite-element method, show that the heterogeneities of interface properties existing in every real system generate a quasiperiodic distribution of stresses and strains near the interface.

Pic. 2. Distribution of the values of the strain intensity (a) and the stress intensity (b) at the interface «ceramic coating – intermediate layer».
Peaks of the stress are strong concentrators determining cracking and flaking of the coating. Areas of maximal normal tensile stresses are centres of generation of nanopores in local volumes of the material. This reconstruction of the internal structure then prepares the material for the formation of microcracks in the regions where the moment stresses change sign. At the mesoscale level, agglomeration of cracks occurs, which leads to the generation of macrocracks propagating along the directions of maximal tangential stresses. The surface layer of the material is fragmented by quasiperiodic cracks, and with further loading the fragments of the coating will flake off in the regions of maximal tensile stresses perpendicular to the interface.


CALCULATION AND THEORETICAL ANALYSIS OF PREPARING TWO-COMPONENT SHS-SYSTEMS
K.F. Galiev, M.S. Kuznetsov, D.S. Isachenko
Principal investigator: А.О. Semenov, assistant
Language supervisor: A.V. Tsepilova, teacher
Tomsk Polytechnic University, 30, Lenina St., Tomsk, Russia, 634050
E-mail: [email protected]
Introduction
According to the development program of the nuclear industry of Russia for 2007–2010 and up to 2015, approved by the Russian government, an accelerated development of the nuclear power industry is planned to ensure the country's geopolitical interests. This includes the issue of creating new materials for various nuclear power plant products. One such technology is self-propagating high-temperature synthesis (SHS). This synthesis method has some specific features that distinguish it from existing methods for producing inorganic compounds: high temperatures and short synthesis times, small energy consumption, simplicity of equipment, the ability to manage the synthesis process and, as a consequence, the production of materials with a given combination of properties [1]. Fundamentally, the following ways to control SHS exist [2]:
1. Management during preparation of the blend.
2. Management during the process, which includes thermal heating of the system.
3. Management during cooling of the finished products, consisting in changing the temperature regime of cooling and the type of atmosphere.
For management of the synthesis, the topical problem is a preliminary calculation and theoretical analysis of the parameters of the initial batch of components and of the SHS process. To solve this problem, the main controlling factors of self-propagating high-temperature synthesis should be modeled.
Calculation and theoretical definition of the fundamental features of SHS
To determine the principal features of SH-synthesis, a computational and theoretical analysis is carried out based on the determination of the adiabatic combustion temperature Tад of SHS materials. The value of Tад in combination with extensive experimental studies of SHS of different classes of materials gives the opportunity to set criterial values of the adiabatic temperature [3]:
• Tад < 1000 K: combustion in the system is absent and synthesis is not possible;
• Tад > 1000 K: the combustion reaction takes place in the system;


• 1000 K < Tад < 2000 K: further research is necessary.
The main condition for determining the adiabatic temperature is the equality of the enthalpies H of the starting materials at the initial temperature T0 and of the final products at the adiabatic temperature. This means that all the heat Q emitted by the reaction goes into heating the combustion products from the initial temperature to the combustion temperature, which can be represented as

Σ_{i=1..n} [Hi(Tад) − Hi(T0)] = Q,

where n is the number of components of the precursor mixture. This equation is solved using a method based on the quantum Debye model, which allows relating the specific heat to the parameters of the initial mixture of components, in contrast to the classical model [4].
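Once the heat capacities of the products are known (for example, from the Debye model), the enthalpy balance above reduces to a single nonlinear equation in Tад. A minimal sketch of solving it is shown below, in Python with SciPy; the constant heat capacity and the numerical values in the example are illustrative assumptions, not data from the paper.

```python
from scipy.integrate import quad
from scipy.optimize import brentq

def adiabatic_temperature(cp_products, q_react, t0=298.0, t_max=6000.0):
    """Solve sum_i integral_{T0}^{T_ad} cp_i(T) dT = Q for the adiabatic temperature.

    cp_products : list of functions cp_i(T), J/(mol*K), one per combustion product
    q_react     : heat released by the reaction, J (on the same molar basis)
    """
    def residual(t_ad):
        dh = sum(quad(cp, t0, t_ad)[0] for cp in cp_products)
        return dh - q_react
    return brentq(residual, t0 + 1.0, t_max)

# Illustrative example: a single product with a roughly constant cp of 50 J/(mol*K)
# and 100 kJ of reaction heat gives T_ad of about 2300 K.
print(adiabatic_temperature([lambda T: 50.0], q_react=1.0e5))
```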

Fig 1. Dependence of the heat capacity for tungsten boride, calculated using the Debye model (1) and the empirical method (2). Curves presented in Fig. 1 show a satisfactory agreement at low and medium temperatures compared with the classical method. In addition, the Debye model has no restriction in the field of high temperature and allows relating the heat capacity with the parameters of the state of the synthesized sample.

Modeling the dynamics of temperature fields in the directed synthesis
To model the dynamics of the temperature fields, it is required to solve the heat conduction equation

a·(∂²T/∂r² + (1/r)·∂T/∂r + ∂²T/∂z²) + qV/(C(T)·ρ) = ∂T/∂t,

where a is the thermal diffusivity coefficient, ρ is the density of the sample, and qV is the volumetric heat source. The equation constitutes a boundary-value problem with the following boundary and initial conditions:

1. λ·∂T/∂r |r=R = ±α·(Tr=R − Ts) ± ε·σ·(T⁴r=R − T⁴s);  λ·∂T/∂r |r=0 = 0;
2. λ·∂T/∂z |z=H = ±α·(Tz=H − Ts) ± ε·σ·(T⁴z=H − T⁴s);  T |z=0 = TГ,

where λ is the thermal conductivity coefficient; α is the heat transfer coefficient; ε is the "blackness" (emissivity) coefficient of the surface; σ is the Stefan-Boltzmann constant; Ts is the ambient temperature; TГ is the preheating temperature of the sample; R is the sample radius; H is the height of the original sample.
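A minimal finite-difference sketch of how the interior nodes of this problem could be advanced in time is given below. It is written in Python; an explicit scheme with a constant heat capacity is a deliberate simplification of the C(T) dependence discussed above, and all names and values are illustrative, not taken from the paper.

```python
import numpy as np

def step_temperature(T, a, dt, dr, dz, q_v, c_rho):
    """One explicit time step of dT/dt = a*(T_rr + T_r/r + T_zz) + q_v/c_rho
    on a uniform (r, z) grid. Boundary nodes are left to the caller, which
    should impose the convective/radiative condition and T = T_g at z = 0.
    Stability requires roughly dt <= 1 / (2*a*(1/dr**2 + 1/dz**2)).
    """
    nr, nz = T.shape
    r = np.arange(nr) * dr
    Tn = T.copy()
    for i in range(1, nr - 1):
        for j in range(1, nz - 1):
            t_rr = (T[i+1, j] - 2*T[i, j] + T[i-1, j]) / dr**2
            t_r = (T[i+1, j] - T[i-1, j]) / (2*dr*r[i])
            t_zz = (T[i, j+1] - 2*T[i, j] + T[i, j-1]) / dz**2
            Tn[i, j] = T[i, j] + dt * (a*(t_rr + t_r + t_zz) + q_v[i, j]/c_rho)
    return Tn

# Illustrative usage on a coarse 20 x 40 grid (all numbers are placeholders).
T = np.full((20, 40), 300.0)
T = step_temperature(T, a=1e-5, dt=1e-3, dr=1e-3, dz=1e-3,
                     q_v=np.zeros((20, 40)), c_rho=2.0e6)
```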

On the basis of the calculated data, laboratory experiments on the synthesis of tungsten boride were conducted.
Implementation of modeling the main factors of SHS management using the synthesis of tungsten boride as an example
The material used in control and protection systems is tungsten boride. The synthesis of materials based on tungsten boride was carried out by the following reaction: W + B → WB. Fig. 2 shows the thermograms of the combustion of the tungsten-boron system with identical initial conditions. The measurements were performed for the central point of the cylindrical sample. There is satisfactory agreement between the experimental and calculated data. The difference in different parts is about 5 to 12%, which agrees well with the calculation error caused by the error of the mathematical model used in the calculations of two-component systems.
Fig. 2. Experimental and calculated thermograms of the SHS system W–B.

Satisfactory agreement between calculated and experimental data at this stage suggests the correctness of the numerical methods and the possibility of calculating the other two-component SHS systems. REFERENCES 1. A.G. Merzhanov, B.I. Khaikin. Combustion of a substance with a solid reaction layer. // Reports of the Academy of Sciences CAS. – 1967. Т. 173. – № 6. – P. 1382–1385. 2. A.G. Merzhanov. Self-Propagating HighTemperature Synthesis / Physical Chemistry: Modern Problems. Yearbook. Ed. Y.M. Kolotyrkin Moscow: Khimiya, 1983. - P. 6 - 45. 3. V.I Boiko, D.G. Demyanyuk, O.Y. Dolmatov, I.V. Shamanin, D.S. Isachenko. Self-Propagating High-Temperature Synthesis of absorbing material for nuclear power plant // Proceedings of the Tomsk Polytechnic University. - 2005. - T. 308. № 4. - P. 78-81. 4. E.A. Levashov, A.S. Rogachev, V.I. Yuhvid, I.P. Borovinskaya. Physical chemical and technological bases of self-propagating hightemperature synthesis.- Moscow: Publishing Bean, 1999. 176.


INCREASE OF OPERATIONAL PROPERTIES OF POWDER PAINTS BY NANOPOWDERS INTRODUCTION AND PLANETARY-TYPE MILL PROCESSING Ilicheva J.A., Yazikov S.U1. Scientific adviser: Yazikov S.U1 Tomsk Polytechnic University, 1SPC «Polus»

Painting with powder paint-and-lacquer materials (PLM) represents one of the most perfect coating technologies which meet modern requirements. At present this technology has been introduced practically in all branches of industry. Nanopowders introduction in a powder paint production is aimed at increasing quality and expanding the range of powder coatings application, namely, using them for on-board equipment of space vehicles painting. Therefore a determining factor at choosing a coating system is its ability to protect a painted object in operation conditions during the required period. It is necessary to receive durable coatings, applying high-quality PLM, modern equipment, methods of surface treatment and paint spraying. One of such methods is painting of various surfaces, designs, products with PLM powders which provide anticorrosive protection of a product. They have a number of advantages in comparison with liquid PLM: - Production of coatings with high physicalmechanical, chemical, electrical insulating, protectively-decorative properties; - Greater thickness of one layer coating in comparison with liquid paints, which demand several layers painting; - Safety of work conditions with PP and their storage, absence of solvents; - Ecological safety; - Adaptability to manufacture, i.e. full automation of coatings manufacture; - Profitability as paint recycling is easily provided. However, polymeric powder paints presented in the market, do not satisfy a number of technological requirements in wear resistance, fragility, strength and functional characteristics. Due to nanopowders introduction in powder paint production it is possible to get painted objects with qualitatively new properties: wear resistance increase, maintenance of specific superficial electric resistance, heat conductivity properties, etc. Nanoparticles introduction in powder PLM was primary made by the method of dry mixing: nanopowder is mixed up with a basis of readymade powder paint until a homogeneous mixture is produced. Advantage of such production is its


simplicity. However application spheres of a received product are limited, as dry mixing of various materials (particles which differ in diameter, morphology, density) can cause their stratification or division. Besides, it is practically impossible to reuse the paint collected in a recuperator, as the coating, will have big difference in colour, in comparison with a fresh paint. There are other negative effects of dry mixing. We offer the technology of powder compositions production modified by nanopowders (PCMN). It’s unique in the way of uniform nanoparticles (distribution) with particles of a polymeric paint at polymerization (film formation), i.e. nanoparticles are located at regular intervals in the whole volume that provides its high physicalmechanical properties. The process of the modified polymeric paint is shown in (fig.1).

Fig. 1. The scheme of powder composition production.
The powder paint arrives at a planetary-type mill, where the paint particles are crushed to the required size (Fig. 2). As a result, the total surface area of the particles increases. Theoretical calculations show that after thirty minutes of milling the value of the area approximately doubles (Fig. 3).
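As a simple check of the statement that finer milling increases the total particle surface, the following sketch computes the total surface area of a fixed mass of powder modelled as identical spheres, for which the area scales inversely with particle diameter. It is in Python, and the density and particle sizes are illustrative assumptions, not measured values from this work.

```python
import math

def total_surface_area(mass_kg, density_kg_m3, diameter_m):
    """Total surface area of a fixed mass of identical spherical particles.

    For spheres, the total area scales as 1/diameter: halving the mean particle
    size during milling roughly doubles the total surface area.
    """
    particle_volume = math.pi / 6.0 * diameter_m ** 3
    particle_area = math.pi * diameter_m ** 2
    n_particles = mass_kg / (density_kg_m3 * particle_volume)
    return n_particles * particle_area

# Illustrative numbers: 1 kg of paint powder (density ~1500 kg/m^3),
# mean particle size reduced from 60 um to 30 um by milling.
before = total_surface_area(1.0, 1500.0, 60e-6)
after = total_surface_area(1.0, 1500.0, 30e-6)
print(after / before)   # -> 2.0
```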

Fig. 2. Dependence of particles size on milling time


Fig. 3 Dependence of particles area on milling time When the milling is over there is still a small percent of large fraction¬ in the paint .To remove particles more than 30 microns in diameter the mix is driven to the aerodynamical classifier. Subsequent mixing of nanopowder with a polymeric paint occurs in a pneumogun where fluidized bed is made from of a paint and nanopowder particles by means of compressed air. As a result of friction against each other and chamber walls paint particles are statically charged, and enveloped by nanopowder particles at regular intervals. The reason of substantial improvement of PCMN properties in comparison with initial powder paint, is, that nanoparticles are located densely

enough and at regular intervals they settle down between larger particles of a polymeric paint. At the subsequent painting a partial pore filling occurs that protects a substrate from water and other aggressive liquids and consequently, improves anticorrosive properties. Thus the following results are obtained: First, in all cases when nanopowders of oxide, nitrides and pure metals are used wear resistance of coatings increases. Secondly, there is a possibility to produce coatings with a set of specific superficial electric resistance. Besides, coatings can be given other special properties. Now further work on coatings tests at initial conditions, and after modification are being carried out. References: 1. Стокозенко В.Н. Нанотехнологии сегодня и завтра. Промышленная окраска. 2006. № 3. С. 22-24. 2. Sawitowski T. Europ. Coat. J. 2005. №3.-Р.101 c. 3. Порошковые краски. Технология покрытий: Пер. с англ. под ред. проф. А.Д. Яковлева-СПБ: ЗАО «Промкомитет», Химиздат, 2001.-256с.:ил.

THE IMPACT OF THERMOCHEMICAL TREATMENT ON WEAR-RESISTING QUALITIES OF CAST IRON Kuszhanova A.A. Science supervisor – Sharaya O.A., candidate of technical science, PhD. Karaganda State Technical university Kazakhstan, Karaganda, Bulvar Mira Street, 56 E-mail: [email protected] Nowadays the development of metallic materials with brand-new properties for mechanical engineering and oil and gas industry is becoming one of the most relevant issues. The solution to the issues lies in a complex approach combining the principles of formation of the material chemical composition and structure by means of technological process development of its hardening treatment. Physical-chemical methods of impact on the material surface hold a special place among hardening technologies as the surface condition defines the level of durability and operating properties of machine details.

In most cases it is the surface that is exposed to excessive wear, contact loads and corrosion damage. Hardened surface layers are produced through the targeted formation of the required structural state of the metal with the help of thermochemical methods. Modification of the surface changes the structure and phase composition of the surface layer, which makes it possible to obtain new properties. Among the hardening treatment processes for products made of steel and cast iron, the most promising methods are the following:


1) technologies of internal saturation with interstitial elements, for instance nitriding and carbonitration;
2) plasma and laser treatment, which form a developed dislocation structure, substructure and extra-fine grain;
3) combined methods of surface hardening, in which the structure being formed involves the maximum number of hardening mechanisms.
This work studies the structure and properties of grey and high-strength cast iron after carbonitration. Carbonitration is a thermochemical treatment with simultaneous saturation of the product surface with nitrogen and carbon from non-toxic melts of cyanate salts. The essence of the method is the following: tools and machine parts are heated in melts of cyanate salts at a temperature of 540-580 °C with a holding time of 5-40 minutes for tools and 1-3 hours for machine parts. In the liquid state the components mutually dissolve; the eutectic of composition 8% K2CO3 and 92% KCNO crystallizes at a temperature of 308 °C. The phase diagram shows that melts containing 0-30% K2CO3 and 100-70% KCNO can be used for carbonitration at 540-580 °C. According to D.A. Prokoshkin, it is efficient to use a bath consisting of 75-80% potassium cyanate and 1-20% potassium carbonate (potash). A larger content of potash leads to its precipitation as a solid phase; the melt thickens and becomes useless. At the temperatures of the carbonitration process potassium cyanate interacts with atmospheric oxygen:
2KCNO + O2 ↔ K2CO3 + CO + 2N(at) (1),
forming carbon monoxide and atomic nitrogen. Carbon monoxide dissociates on the metal surface:
2CO ↔ CO2 + C (2),
with the release of active carbon. The carbonitration process is widely used for hardening metal-cutting tools made of high-speed steels. At present the structure and properties of cast iron after carbonitration have not been fully studied, while the character of interaction under physico-chemical treatment depends mainly on the product material. The objects of the research are samples of grey cast iron 25 and high-strength cast iron 60 after carbonitration. A typical view of the cast iron microstructure after carbonitration is shown in Picture 1. There is a dark zone on the surface followed by a light non-etching layer separated from the matrix by a visible boundary. Graphite inclusions, piercing the whole layer, come out to the surface.


Picture 1 – Microstructure of grey cast iron 25 (a) and high-strength cast iron 60 (b) after carbonitration
In the process of carbonitration the cast iron is saturated with nitrogen, carbon and oxygen. Cast iron is a multicomponent alloy based on iron, containing silicon, carbon and oxygen in chemically bonded form and in the free state in the form of graphite. Picture 2 below shows the distribution of chemical elements; the analysis was made on the scanning electron microscope “Vega Tescan”. The interaction between the cast iron elements and the saturating components in the carbonitration process is of a complex character and depends on the thermodynamic activity of the elements. The study of the element distribution in the cast iron surface layer after carbonitration has been carried out by micro X-ray spectral analysis on the “EMAX-8500E” and “Camebax-MBX” facilities.

Picture 2 – Chemical analysis of cast iron on the scanning electron microscope “Vega Tescan”

The increase in the carbonitration temperature leads to an increase in the microhardness of all the examined samples. However, high microhardness at the surface can result in spalling of the hardened layer during operation. Therefore, the carbonitrated layer has to retain plasticity. High microhardness in combination with good plasticity is the essential condition for providing high wear resistance of cast iron. In this work the wear resistance of samples after different types of thermochemical treatment has been tested. Ni-carbing and bath nitriding have been chosen among the applicable methods of thermochemical treatment of cast iron products as being the closest to the carbonitration process.

Ni-carbing has been carried out in a gas mixture of ammonia and exogas at a temperature of 590 °C for 6 hours. In bath nitriding, the samples have been saturated in a salt melt at 570 °C for 2 hours. The higher wear resistance of cast iron after carbonitration in comparison with ni-carbing, especially under heavy loads, can be explained by the plasticity of the carbonitrated layer and the good conformability of the rubbing surfaces. A batch of dog rings for the ZAZ-968 automobile has been carbonitrated in a specially designed mandrel at a temperature of 560 °C for 3 hours. Benchmark trials and road tests have shown a 2.6-fold increase in their wear resistance compared with unhardened ones.

A SYNTHESIS OF POROUS OXYNITRIDE CERAMICS BY SELF-PROPAGATING HIGH-TEMPERATURE SYNTHESIS. THE INFLUENCE OF AL2O3 DILUTION RATE ON SHS PARAMETERS Maznoy1 A.S., Kazazaev2 N.Yu. Scientific adviser: Kirdyashkin1 A.I., PhD 1. Department of Structural Macrokinetics, TSC SB RAS, 634055, Russia, Tomsk, 10/3 Akademicheskii Avenue 2. Tomsk State University, 634050, Russia, Tomsk, 36 Lenin Avenue E-mail: [email protected]
Introduction. Ceramics are extensively used for the production of porous penetrable materials because of their high strength, wear resistance, and resistance to aggressive media. However, it is known that the introduction of nitrogen into ceramic structures considerably improves their operational characteristics. β-SiAlON is a kind of oxynitride and is most commonly described by the formula Si6-zAlzOzN8-z, where the Z value can be varied from 0 to about 4.2. Sialon ceramic materials have low thermal conductivity and high resistance to thermal shocks, and can serve as heat-insulating, structural, and filtering materials under the conditions of heat cycles, high temperatures, and corrosive media. One of the advanced methods for the production of porous penetrable materials from oxynitrides is Self-propagating High-temperature Synthesis (SHS), also called Combustion Synthesis (CS) [1]. In the centre of attention of our studies are ways of synthesizing porous oxynitride ceramic SHS-materials on the basis of Tomsk oblast silica-alumina raw materials, silicon and aluminium powders.
A casting technique may be used for the production of highly porous preforms from the reagents for the consequent combustion synthesis of oxynitride ceramics. The porous space of the preforms is formed by gassing in the volume of the slurry; in our case, this is an interaction between aluminium and water. It has been found experimentally that 17.9% of aluminium is required. The preforms are then combustion synthesized, which involves mass transfer between the porous body of the preforms and nitrogen and a priori requires a connected-pore system penetrating the entire volume of the material.
Investigation techniques. We found that SHS of preforms with the composition Si4Al2O2 (the basis of β-SiAlON with Z=2) did not lead to a high nitrogen saturation degree – we got only 0.46. (The nitrogen saturation degree is defined as the ratio between the nitrogen trapped in the volume of the preforms as a result of CS and the nitrogen required for the total conversion of the nitride-generating reagents. We assume that only the silicon and aluminium powders react with nitrogen; the low-probability reaction of silicon oxynitride formation was not taken into account.) This is explained by the presence of melt regions in the preform structure. The maximal temperature was fairly high, and fusible components of the preforms melted to form alloyed regions. Nitrogen cannot penetrate into those regions. An XRD analysis shows β-SiAlON phases, but the residual components of the charge are also present.


Additives are usually used to decrease the maximal reaction temperature and/or to separate the silicon grains in order to improve their reactivity in the liquid state. Therefore, we studied how the CS parameters depend on the dilution rate. From the point of view of cost-effective production, using sialon powders or silicon nitride powders as diluents is not desirable [2]. We used alumina as the dilution agent.

Figures 2, 3 and 4 respectively show the rate of synthesis, the maximal temperature of the combustion wave, and the nitrogen saturation degree as functions of the Al2O3 dilution rate of the charge.

The starting materials used were:
1. Aluminium ASD-4 (DAv = 10 µm);
2. Silicon dust CR-1 (DAv = 10 µm);
3. Kaolinite clay produced by the company «TGOK «Il’menit» (DAv < 63 µm);
4. Alumina (DAv = 10 µm).
The experimental procedures were as follows:
1. Alumina was added on top of the mass of the reaction charge (the charge was kept constant at the level of 70 g), which was thoroughly mixed according to the formula Si4Al2O2; that is, the mass of the diluted preforms increases with increasing dilution rate by the value of that rate.
2. The water/solid ratio for tempering the charge was 0.625.
3. The slurry was cast in a cylindrical mold with V = 105.62 cm3 (D = 41 mm, H = 80 mm).
4. Porous preforms were produced using endothermic sponging of the slurry in a muffle furnace with a programmed heating controller in air (know-how). The preforms were further roasted in the muffle furnace at 600 °C for 45 minutes each.
5. Combustion synthesis of the preforms was performed in an autoclave in a nitrogen atmosphere at 8 MPa pressure.
6. The maximal temperature of the combustion wave was estimated using a W-Re thermocouple. The combustion rate was calculated as the height-to-time ratio.
It was found during roasting of the preforms (Fig. 1) that the higher the degree of dilution, the higher the weight loss caused by hydrate elimination. This fact can be explained by the difference between the thermal expansion coefficients of the charge components. During roasting, micro-cracking of the preform porous skeleton occurred with the formation of open porosity structures.
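To make the two quantities used throughout this paper concrete, the short sketch below works through the arithmetic: the Al2O3 dilution rate is expressed as mass per cent added on top of the fixed 70 g charge, and the nitrogen saturation degree is the mass of nitrogen trapped during CS divided by the nitrogen needed to convert all silicon to Si3N4 and all aluminium to AlN. Only the 70 g charge value is taken from the text; all other numbers are hypothetical placeholders, not measured data.

# Illustrative arithmetic for the dilution rate and nitrogen saturation degree.
CHARGE_G = 70.0                            # reaction charge, kept constant (from the text)

def alumina_addition_g(dilution_rate_pct):
    # Al2O3 added "overweight", i.e. on top of the fixed charge mass.
    return CHARGE_G * dilution_rate_pct / 100.0

def nitrogen_saturation(mass_gain_g, m_si_g, m_al_g):
    # N required for total conversion: Si -> Si3N4 (4 N per 3 Si), Al -> AlN (1 N per Al).
    n_required_g = 14.01 * (4.0 / 3.0 * m_si_g / 28.09 + m_al_g / 26.98)
    return mass_gain_g / n_required_g

print(alumina_addition_g(15))                          # 15% dilution -> 10.5 g of Al2O3
print(round(nitrogen_saturation(9.0, 30.0, 14.0), 2))  # hypothetical N gain, Si and Al masses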


Fig. 2. Combustion rate (mm/sec) versus Al2O3 dilution rate (mass % overweight).


Fig. 3. Maximal SHS temperature (°C) versus Al2O3 dilution rate.
The CS rate and the maximal temperature decrease with increasing dilution rate. The maximum nitrogen saturation degree of 0.55 was reached at the 15% dilution rate. When the dilution rate was higher than 15%, the nitrogen saturation degree started decreasing. This occurred because of a change in the combustion wave type: we observed spin and self-oscillation regimes rather than the conventional layer-by-layer regime. Some of the samples were not synthesized in the self-propagating mode at all. The considerable variation of the maximal temperature for the same synthesis conditions can be explained by the position of the thermocouple tip in the pore structure of the preform: the temperature measured in a pore is lower than that measured in contact with the skeleton material.


Fig. 1. Preform weight loss on roasting (mass %) versus Al2O3 dilution rate.



Fig.4. Nitrogen saturation degree versus Al2O3 dilution rate.

Conclusion. We have shown that: 1) the rate of weight loss during preform roasting increases with the Al2O3 dilution rate despite the overall increase in the density of the preforms; 2) the maximal temperature and the rate of combustion synthesis are significantly reduced with increasing dilution rate, and a change of the synthesis regime from layer-by-layer to self-oscillating or spin combustion was observed; 3) by reducing the influence of coagulation of the low-melting component, an increase in the nitrogen saturation degree up to 0.56 was achieved. To increase the nitrogen saturation degree further, we intend to undertake additional studies of how the nitrogen pressure affects the CS and to assess the prospects of using special fluoride additives [3]

facilitating the process of nitrogen infiltration into the reaction zone. References 1. Maznoy A.S. «Prospects for resourcesaving synthesis of advanced ceramic materials on the basis of Tomsk oblast raw materials» // Proceedings of the 16th International Scientific and Practical Conference of Students, Post-graduates and Young Scientists «Modern technique and technologies MTT’ 2010» (April 12 - 16, 2010 Tomsk, Russia) p. 58-60. 2. N. Pradeilles et al. «Synthesis of b-SiAlON: A combined method using sol–gel and SHS processes» /Ceramics International 34 (2008) 1189–1194. 3. Y.Chen et al. «PTFE, an effective additive on the CS of silicon nitride» / Journal of the European Ceramic Society 28 (2008) 289-293

MODERN APPLICATION OF HYDROXYAPATITE N.A. Nikiteeva, E.B. Asanov, L.A. Leonova Scientific supervisor: L.A. Leonova, PhD Language consultant: A.E. Marhinin Tomsk Polytechnic University, 634050, Russia, Tomsk, Lenina, 30 E-mail: [email protected]
Introduction
Calcium hydroxyapatite (HA) is the main mineral of bones and hard tissues; the inorganic part of human bone contains 97% calcium hydroxyapatite [1]. According to numerous studies, calcium hydroxyapatite has several advantages over other calcium sources used in the food industry and in dietary calcium supplements, showing significantly higher effectiveness and digestibility. The development of work on the synthesis and structural study of bioceramic materials based on hydroxyapatite has resulted in the creation of new bioactive materials. These materials are fully compatible with the tissues of the human body, are not rejected by the body, and stimulate the growth of bone tissue. Their use will lead to fundamental changes in reconstructive surgery, dentistry and traumatology. The purpose of this article is to identify the significance of hydroxyapatite in various spheres of human life. In line with this purpose, the research objective is to identify the core range of potential users of hydroxyapatite and the industries where it can be applied.

Fig 1. Calcium hydroxyapatite
The main part
Nowadays Russia is experiencing an acute shortage of drugs and dietary supplements containing calcium. Calcium deficiency leads to the fact that every year in Russia about 1 million people suffer from diseases associated with it, and the number of such people has increased from 26 million in 2000 to 33 million now. In Russia, 75% of children under the age of 10 suffer from osteopenia, while 49% of Russians under 16 years of age and 10.5 million Russian citizens over the age of 50 suffer from osteoporosis [2]. As can be seen, the problem of rehabilitation of Russia's population is directly related to the elimination of calcium deficiency.


According to nutritionists, the extent of calcium deficiency can be judged from the fact that every Russian today, on average, receives only about 30% of the required amount of calcium. This means that Russia should produce and import about 70 tons per year of calcium-containing substances, among them no less than 6 thousand tons per year of high-tech products for the treatment of osteoporosis, 5 tons per year of expensive composites for dentistry and 2.5 tons per year of stimulants of bone tissue regeneration to treat the 0.5 million Russians who come to hospitals every year because of injuries. At the present time Russia produces and imports more than 10% of the required amount of calcium, and mainly the products of foreign firms are used, as a result of which Russia has fallen into a "calcium" dependence on foreign pharmacopoeia. All this points to the need for fundamental changes in the design and manufacture of Russian calcium preparations [3]. Preparations with HA may be the solution to this problem: as a dietary supplement they can help the consumer build strong bones and muscles, maintain a correct diet, strengthen and support the immune system, and recover from sleep disturbance, stress and a tendency toward depression; they are also useful for athletes and those engaged in hard physical work and, conversely, for those leading a sedentary lifestyle. With insufficient calcium and phosphorus the following occur: nervousness and irritability, fatigue, weakness, bone fragility, eczema, insomnia, high blood pressure, localized numbness or tingling in the hands or feet, muscle pain, decrease in liver function, seizures or loss of consciousness, delirium, depression, heart palpitations, cessation of growth, sore gums and tooth decay. Finally, diseases develop such as osteoporosis, arthritis, allergies and their complications, rickets, skin disorders (itching, eczema, psoriasis), parathyroid dysfunction, hepatitis and toxic liver damage, increased permeability of blood vessels, pneumonia, pleurisy, endometriosis, depression, insomnia, cramps and restless leg syndrome, dental caries, and periodontal disease. HA, while not being a drug itself, can be a carrier of drugs. Nanostructured hollow particles based on hydroxyapatite can be loaded with various substances, such as anti-inflammatory drugs, collagen or bone morphogenetic proteins, which will promote the healing of bone injuries [4]. For consumers in the field of reconstructive surgery, orthopedics, traumatology, dentistry and cosmetology, HA can be used as durable ceramic pieces that can be made in any shape or in the form of large bone fragments or beads, allowing bone cavities and defects to be filled; as fine powders serving for fillings in the mouth and compositions intended to change the colour of teeth; as


well as filling materials and gels with a high content of active amorphous HA. The use of nanodispersed hydroxyapatite in the formation of coating materials for implants and prostheses modifies the surface properties of the metal, which creates conditions for adhesion, migration and cell growth and promotes the integration of the coating materials with bone tissue.

Fig 2. Mechanism of accretion of bone tissue with a biomaterial coating based on HA on a dental implant
A fundamentally new form of HA is the so-called "bone cement". Its advantage over its predecessors is the possibility of changing some characteristics of the material during the operation itself, since there is enough time before the initial strength is set (about 10-15 minutes after mixing). This allows bone defects of any shape to be filled. The plasticity of the cement makes it possible to shape it in any form during the planning of the operation [5]. It is assumed that rapid healing of the bone occurs due to partial dissolution of the HA coating, which leads to increased concentrations of calcium and phosphorus in the environment and promotes the formation of new hydroxyapatite microcrystals around the implant. In turn, these are integrated with collagen, and rapid formation of high-grade bone proceeds by the type of «creeping» osteogenesis. It should be noted that hydroxyapatite accelerates the initial biological response to metallic implants, in particular those made of titanium. It is expected that after some time the HA layer will be fully or partially dissolved, and by that time the titanium will have formed a bond with the bone almost as strong as that of hydroxyapatite. It is known [6, 7] that hydroxyapatite has sorption properties towards a variety of cations and anions, including heavy metals and radionuclides. With the recent emphasis on environmental issues, many researchers are considering the possibility of its use for the sorption of heavy metals in the cleaning of various environmental objects, for the accumulation of radionuclides from the environment, and in the field of radioactive waste disposal. In addition, there is work on the use of hydroxyapatite as a pharmaceutical preparation for heavy metal poisoning. Due to the ability of hydroxyapatite to prevent the penetration into the skin of pollutants that cause irritation, cosmeticians have begun to pay attention to HA for the preparation of creams and emulsions.

Fig 3. Nanodispersed HA
Conclusion
Calcium hydroxyapatite is a unique material, and the breadth of its application has been demonstrated in various fields: dentistry, tissue engineering, surgery, cosmetology, as well as industry. Recently, more and more studies are devoted to obtaining chemically and medically pure hydroxyapatite, studying its properties and expanding the range of its application.

References 1. Hench L. Bioceramics // J. Amer. Ceram. Soc. 1998. – Vol. 81. – № 7. – P. 1705–1728. 2. Anikin S.G. //Medical advise. 2010. №7-8 3. Uvarova U. //Remedium Privolzhe. 2010. №8 4. Ming-Yan Ma, Ying-Jie Zhu, Liang Li and Shao-Wen Cao. // J. Mater. Chem. – 2008. - Vol. 18 - P. 2722-2727. 5. Barinov SM., Komlev VS Bioceramics based on calcium phosphates. – M.:Nauka, 2005. – 204 p. 6. Suzuki B.T., Hatsushika T., Miyake M. // J. Chem. Soc., Faraday Trans. – 1982. – Vol. 78. – P. 3605–3611. 7. Suzuki B.T., Hatsushika T., Miyake M. //J. Chem. Soc., Faraday Trans. – 1984. – Vol. 80. – P. 3157–3165.

INFLUENCE OF COPPER AND GRAFT-UHMWPE ONTO THE WEAR RESISTANCE OF UHMWPE MIXTURE Piriyayon S. Scientific adviser: Panin S.V., PhD, professor. Institute of Strength Physics and Materials Science SB RAS 634021, Russia, Tomsk, Akademicheskiy ave, 2/4 E-mail: [email protected]
Introduction
UHMWPE comes from a family of polymers with a deceptively simple chemical composition, consisting of only hydrogen and carbon. However, the simplicity inherent in its chemical composition belies a more complex hierarchy of organizational structures at the molecular and supermolecular length scales. At the molecular level, the carbon backbone of polyethylene can twist, rotate, and fold into ordered crystalline regions. At the supermolecular level, UHMWPE consists of powder (also known as resin or flake) that must be consolidated at elevated temperatures and pressures to form a bulk material. Further layers of complexity are introduced by chemical changes that arise in UHMWPE due to radiation sterilization and processing. UHMWPE (ultra-high molecular weight polyethylene) is a kind of thermoplastic polyethylene. It is widely used in orthopedic surgery for joint replacement due to its good processability, very low friction coefficient, high impact resistance, high resistance to abrasion, very low wear, chemical resistance and biocompatibility. It is odorless, tasteless, and nontoxic. However, even though UHMWPE has very low wear compared to other polymers, wear is still a major problem in

tribotechnical applications. A lot of attention has recently been paid to increasing the strength and wear resistance of composite polymeric materials. Traditionally, the strength and wear resistance of polyolefins are increased by the addition of micron-size reinforcement particles of inorganic materials. Recently, intensive investigations have been carried out to explore the possibility of adding nano-sized fillers because of their excess surface energy (they have very high surface energy). The small size of the filler particles can provide a very fine and uniform structure in the UHMWPE specimens.
Materials and research technique
UHMWPE powder with a particle size of 50-70 µm (GUR by Ticona, Germany) was used for the specimen preparation. The molecular weight of the UHMWPE powder used is 2.6×106 g/mol.
Preparation of UHMWPE-g-SMA
We employed grafted UHMWPE with anhydride and carboxyl functional groups obtained by modification of the polymer in reacting gases (UHMWPE-g-SMA by GoC “Olenta”, Russia). It was assumed that grafting would provide adhesion between UHMWPE particles. UHMWPE-g-SMA and UHMWPE were mixed using a high-speed


homogenizer in dry form. After mixing, the UHMWPE and its mixture powders were used to prepare test specimens by means of a compression machine and, subsequently, a hot-pressing mould. The compression pressure was 10 MPa and the temperature was maintained at 190 ºC for 120 minutes. The specimens were cooled in the mould at a cooling rate of 3-4 ºC/min. The specimens had the shape of a rectangular prism 45 mm long, 50 mm wide and from 5 to 8 mm high. The mixtures were pure UHMWPE with 0, 3, 5, 10 and 20 wt% of UHMWPE-g-SMA; these materials are denoted as UHMWPE-g0, 3, 5, 10, 20, respectively. Then 0.5% of Cu nanopowder was added to these mixtures. Wear tests were performed using an “SMT-1” friction machine. The tests were run without lubrication according to ASTM G77. The specimens had the shape of a rectangular prism 7 mm long, 7 mm wide and 10 mm high; the roller diameter was 62 mm, the revolution rate was 100 rpm, and the applied load was set to 160 N, as shown in Fig. 1.

Fig. 1. Block-on-roller test: a) two specimens during testing with the “SMT-1” machine; b) size of the specimen for the wear test.
Images of the wear track were obtained by shooting micrographs with an optical microscope “Carl Zeiss Stemi 2000–C”, and the track area was measured with the help of the software Rhinoceros v3 [7], as shown in Fig. 2.

Fig. 2. Wear track after the test: a) wear track image from the optical microscope “Carl Zeiss Stemi 2000–C”; b) measurement of the wear track area with “Rhinoceros, v3”
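For orientation only: when a block-on-roller scar is characterized by its width rather than by the planar track area measured here, its volume is often estimated from simple circular-segment geometry. The sketch below is such a generic geometric estimate and is not the authors' Rhinoceros procedure; only the 7 mm block width and the 62 mm roller diameter are taken from the test description, and the scar width is a hypothetical value.

import math

def scar_volume_mm3(scar_width_mm, block_width_mm=7.0, roller_radius_mm=31.0):
    # Volume of the circular-segment groove left by a roller of radius r on a flat block.
    D, r, t = scar_width_mm, roller_radius_mm, block_width_mm
    segment_area = r**2 * math.asin(D / (2.0 * r)) - (D / 4.0) * math.sqrt(4.0 * r**2 - D**2)
    return t * segment_area

print(scar_volume_mm3(2.0))   # hypothetical 2 mm wide scar -> about 0.15 mm^3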

Fig. 3. Wear track of UHMWPE-g0 + 0.5%Cu after test by “SMT-1” machine a) time = 10 minute b) time = 60 minute c) time = 120 minute d) time = 180 minute.

Fig. 4. Wear resistance of the UHMWPE-g-SMA + UHMWPE mixtures.
The wear resistance of the mixtures increases when UHMWPE-g-SMA is mixed with UHMWPE. UHMWPE-g10 shows stable wear at the steady-state wear stage, t = 90-180 min (Fig. 4). One can distinguish two pronounced portions in the wear diagrams of UHMWPE-g3, UHMWPE-g5, UHMWPE-g10 and UHMWPE-g20; in fact, steady-state wear starts after 60 min of loading. At this stage the wear resistance of UHMWPE-g10 is a few times higher in comparison with the four other specimens.

Results
The wear track area of the mixture becomes wider as the testing time increases. The colour of the mixture is dark, and there is some residual polymer at the edges of the wear track, as shown in Fig. 3.

Fig. 5. Wear resistance of UHMWPE-g-SMA + UHMWPE + 0.5%Cu mixture.


The wear resistance of the mixtures is also increased when UHMWPE-g-SMA is mixed with UHMWPE and 0.5% Cu. UHMWPE-g3 + 0.5%Cu shows stable wear at the steady-state wear stage, t = 70-180 min (Fig. 5). One can distinguish two pronounced portions in the wear diagrams of UHMWPE-g3 + 0.5%Cu and UHMWPE-g0 + 0.5%Cu; in fact, steady-state wear starts after 70 min of loading.
Conclusion
The wear resistance of UHMWPE + UHMWPE-g-SMA specimens is increased when UHMWPE-g-SMA is mixed with UHMWPE powder. In addition, mixing with nanosized Cu can improve the wear resistance of the mixture. The wear track area of the UHMWPE-g10 and UHMWPE-g3 + 0.5%Cu specimens is the lowest at the steady-state wear stage in each group of mixtures; the wear resistance of the latter specimen is nearly equal to that of the non-modified specimen. In further work, other fillers can be added to increase the strength, hardness and wear resistance.
Reference
1. Wang H, Fang P, Chen Z, Wang S, Xu Y and Fang Z, Polymer Int. 57:50 (2008).
2. Steven M. Kurtz, The UHMWPE handbook, Elsevier 2004, p. 109.
3. Oklopkova A.A., Popov S.N., Sleptzova S.A., Petrova P.N., Avvakumov E.G. Polymer nanocomposites for tribotechnical applications. Structural chemistry, 45 (supplement), S169-S173: 2004.
4. Oklopkova A.A., Petrova P.N., Sleptzova S.A. and Gogoleva O.V. Polyolefin composites for tribotechnical application in friction units of automobiles. Chemistry for sustainable development, 13, 793-799: 2005.
5. Andreeva I.N., Veselovskaya E.V., Nalivaiko E.I., et al. Ultrahigh molecular weight polyethylene of high density. Leningrad: Izdatelstvo Khimia (Chemistry); 1982.

THE EFFECT OF ELECTRON BEAM IRRADIATION ON WEAR PROPERTIES OF UHMWPE T. Poowadin, L.A. Kornienko, M.A. Poltaranin Scientific adviser: Panin S.V., PhD, professor. Institute of Strength Physics and Materials Science SB RAS 634021, Russia, Tomsk, Akademicheskiy ave, 2/4 E-mail: [email protected]
Abstract
Electron beam irradiation at doses of 25–300 kGy was applied to modify ultra-high molecular weight polyethylene (UHMWPE); this is the theme of our work. Many studies have been carried out to improve the wear properties of UHMWPE by introducing crosslinks into the chain structure of UHMWPE by means of electron beam radiation. It is well known that increasing the crosslink density increases the wear resistance and oxidative resistance; however, it can reduce the mechanical properties of UHMWPE. In our work, all specimens were investigated under the dry condition of “block-on-roller” wear tests. The irradiated samples exhibited high wear resistance that increased with the radiation dose. Furthermore, a three-dimensional profilometer was employed to study the changes of the surface layer of the irradiated specimens.
Keywords: UHMWPE; electron beam; crosslink; wear resistance; nanohardness

Introduction
Nowadays, polymer materials are present in almost all fields, for example, the automotive industry, agricultural engineering, food processing, medical prostheses, aerospace and so on. These materials have been recognized for their resistance to wear and have been developed continuously. UHMWPE is a member of the polyethylene family in which the polymer is formed from ethylene (C2H4). It is being increasingly used in industry as components or parts of machines because of its unique combination of high abrasion resistance, high impact strength, very low friction coefficient, very low wear, chemical resistance and biocompatibility [1]. However, even though UHMWPE has very low wear compared to other polymers, wear is still a major problem in tribological applications. A lot of attention has recently been paid to improving the tribological properties and wear resistance of UHMWPE by means of various kinds of radiation, e.g., gamma (γ) and ultraviolet (UV) rays, X-rays and electron beams [2,3]. The effect of electron beam irradiation on the physical properties of UHMWPE has


been reported by Kim and colleagues in 2005 [4]. Electron beam irradiation at doses of 50–500 kGy was applied to modify UHMWPE in air and N2 environments. They found that the crystallinity increases with the increase of the absorbed irradiation dose up to 200 kGy. The comparative fatigue resistance behaviour of electron beam and gamma irradiated UHMWPE was reported by Urries and colleagues in 2004 [5]. Electron beam doses of 50, 100, and 150 kGy were compared with 25 kGy gamma-irradiated UHMWPE. The experimental results show that the crystallinity increased with the dose, and the wear resistance increased compared with non-irradiated samples. Similarly, for mechanical property and wear improvements, electron beam irradiation of UHMWPE at doses of 50–150 kGy was reported in 2009: Visco and colleagues [6] suggested that electron beam irradiation of UHMWPE at a temperature of 110 °C produces a high amount of crosslinks and improves the polymer's tensile and wear resistance. Many studies in the literature confirm that electron beam radiation increases the wear resistance of UHMWPE. In this paper, electron beam irradiation doses of 25, 50, 150 and 300 kGy were applied to modify UHMWPE in air. All specimens were investigated in dry-condition wear tests in order to estimate the effect of electron beam irradiation on the wear properties of UHMWPE.
Experimental
Materials and specimen preparation
UHMWPE powder with a particle size of 50-70 µm (GUR by Ticona, Germany) was used for the specimen preparation. The molecular weight of the UHMWPE powder used is 2.6×106 g/mol. The UHMWPE powder was used to prepare test specimens by means of a compression machine and, subsequently, a hot-pressing mould. The compression pressure was 10 MPa and the temperature was maintained at 190 ºC for 120 minutes. The specimens were cooled in the mould at a cooling rate of 3-4 ºC/min. The specimens had the shape of a rectangular prism 45 mm long, 50 mm wide and from 5 to 8 mm high. For the radiation treatment, the specimens were irradiated at doses of 25, 50, 150 and 300 kGy with 1.0–2.0 MeV electron beams.
Wear and optical profilometer tests
Wear tests were performed using the “SMT-1” friction machine. The tests were run without lubrication according to ASTM G77. The specimens had the shape of a rectangular prism 7 mm long, 7 mm wide and 10 mm high; the roller diameter was 62 mm, the revolution rate was 100 rpm, and the applied load was set to 160 N. Images of the wear track were obtained by shooting micrographs with an optical microscope “Carl Zeiss Stemi 2000–C”, and the track area was measured with the help of the software Rhinoceros v.3.


The worn surfaces were examined with a Zygo New View 6000 three-dimensional profilometer to determine the surface roughness of the specimens.
Results and Discussion
Wear resistance
The experimental results of the wear tests show that the wear resistance of the irradiated UHMWPE specimens increases with increasing radiation dose. As shown in Figure 1, the wear intensity at the dose of 300 kGy is estimated to be 3 times lower in comparison with pure UHMWPE.

Figure 1. Wear intensity of UHMWPE specimens with different doses of electron beam irradiation.
The micrographs taken with the optical microscope “Carl Zeiss Stemi 2000–C” are shown in Figure 2 for increasing doses of electron beam radiation in the block-on-roller wear tests. It was found that the edge of the wear track area of pure UHMWPE carries much more worn film than that of the irradiated UHMWPE. This observation is related to the wear resistance of UHMWPE.
Figure 2. Worn surfaces of UHMWPE specimens with different doses of electron beam irradiation (pure, EB 25, EB 50, EB 150 and EB 300 kGy).
Surface roughness
The experimental results from the three-dimensional profilometer (Fig. 3) show that the surface roughness of the worn surface area of the irradiated UHMWPE slightly decreases with increasing

the radiation dose. The results obtained correlate with the wear intensity of the specimens. The lowest value of surface roughness is reached at the electron beam dose of 300 kGy and is equal to 0.16 µm.

Figure 3. Relation between the surface roughness and the wear intensity of UHMWPE specimens.
Conclusion
Electron beam irradiation is effective in improving the wear properties of UHMWPE. Electron beam irradiation at doses up to 300 kGy increases the wear resistance of UHMWPE. It was found that the worn film at the edge of the wear track

area is reduced after electron beam irradiation. Similarly to the surface roughness results, it slightly decreases with increasing radiation dose, which is related to the wear intensity of the specimens. The wear intensity of UHMWPE irradiated at a dose of 300 kGy is decreased up to 3 times compared with pure UHMWPE.
Reference
[1] Steven M. Kurtz. The UHMWPE Handbook. Elsevier Academic Press. 2004.
[2] R. L. Clough. Nucl. Instr. Meth. Phys. Res. B. 158 (2001). P. 8–33.
[3] H. Zhang, M. Shi, J. Zhang and S. Wang. J Appl Polym Sci. Vol. 89 (2003). P. 2757–2763.
[4] S. Kim, P. H. Kang, Y. C. Nho and O. B. Yang. J Appl Polym Sci 97 (2005). P. 103–116.
[5] I. Urries, F. J. Medel, R. Rios, E. Gomez-Barrena and J. A. Puertolas. J Biomed Mater Res Part B: Appl Biomater 70B (2004). P. 152–160.
[6] A. M. Visco, L. Torrisi, N. Campo, U. Emanuele, A. Trifiro and M. Trimarchi. J Biomed Mater Res Part B: Appl Biomater 89B (2009). P. 55–64.

ANALYSIS OF TUNGSTEN AND MOLYBDENUM POWDERS COMPACTION AND SINTERING D.D. Sadilov Scientific Supervisor: docent Matrenin S.V. Tomsk Polytechnic University, Russia, Tomsk, Lenin str., 30, 634050 E-mail: [email protected] Introduction Refractory metals and their alloys due to their high heat resistance, are increasingly used in many branches of industrial production: in space technology, missile-and aircraft industry, metallurgy, power, chemical industry. Due to the high melting temperature these materials and products are manufactured almost exclusively by means of powder metallurgy techniques [1, 2]. Thus there is the significant theoretical and practical interest to study the activation process of sintering of refractory metals in order to increase the density of sintered products, more fine-grained structure and to improve their performance. An effective method of activating the sintering process is the use of nanopowders. Pressing and sintering of nanopowders is significantly different from that of powders commonly used in powder metallurgy. This paper deals with processes of molding and sintering of tungsten and molybdenum nanopowders with additions of nickel nanopowder, evaluation of the structure and properties of sintered materials.

Experiment
For the research, W, Mo and Ni nanopowders with a particle diameter of about 100 nm were used [3]. The nanopowders were annealed in vacuum at 750 °C for 2 hours. The powder mixtures were prepared by wet mixing of the W and Mo powders with the addition of 1 wt.% Ni nanopowder in alcohol, followed by plasticization with rubber. The plasticized mixture was then statically pressed in a steel mold under a pressure of 300 MPa. The compacts were sintered in vacuum and in ammonia glow-discharge plasma [4, 5] at 1175...1450 °C; the isothermal holding time was 1 hour. The following research methods were used: determination of the bulk density and the density of the compacts; determination of the density of the sintered samples by hydrostatic weighing; examination of the microstructure, residual porosity, pore nature and distribution (Alta M metallographic microscope). Indentation was performed with a Nano Indenter G200 instrument (MTS Nano Instruments, 701 Scarboro Road, Suite 100, Oak Ridge, TN 37830, USA). As the indenter, a Berkovich pyramid was


used; the load was 50 g. The design of the device allows the indentation curve to be displayed on the monitor in real time. The primary data are the load and the depth of penetration. From the indentation curve the instrument automatically calculates the elastic modulus EIT and the microhardness HIT. Table 1 shows the calculated density of the compacts ρ and their relative density θ. For each composition, the three compacts were pressed at the same compaction pressure.
Table 1. Compact density
№   Composition   ρ, g/cm3   θ, %
1   Mo-Ni         6.8        66
2   Mo-Ni         6.6        64
3   Mo-Ni         6.7        65
4   Mo            6.67       65
5   Mo            6.64       65
6   Mo            6.63       65
7   W             11.57      60
8   W             11.57      60
9   W             11.64      60
10  W-Ni          12.42      64
11  W-Ni          11.75      61
12  W-Ni          11.74      61
Fig. 1. Relative density of the sintered samples versus the homologous sintering temperature
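As a quick consistency check on Table 1 (not part of the original paper), the relative density is simply the compact density divided by the theoretical density of the powder mixture; the handbook densities used below (about 10.2 g/cm3 for Mo, 19.3 g/cm3 for W and 8.9 g/cm3 for Ni) are assumed values.

# Cross-check of the relative densities in Table 1 (illustrative only).
RHO = {"Mo": 10.2, "W": 19.3, "Ni": 8.9}      # assumed handbook densities, g/cm^3

def theoretical_density(mass_fractions):
    return 1.0 / sum(w / RHO[el] for el, w in mass_fractions.items())

rho_th_mo_ni = theoretical_density({"Mo": 0.99, "Ni": 0.01})
print(round(100 * 6.8 / rho_th_mo_ni))        # compact 1 (Mo-Ni): ~67 %, cf. 66 % in Table 1
print(round(100 * 11.57 / RHO["W"]))          # compact 7 (W): 60 %, matches Table 1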

Table 2. Properties of samples sintered in vacuum
№   Composition   t, °С   ρ, g/cm3   θ, %   У (shrinkage), %   EIT, MPa   HIT, MPa
1   Mo-Ni         1450    9.39       92     11.2               264422     3178
2   Mo-Ni         1313    8.25       86     8.2                -          -
3   Mo-Ni         1175    7.65       75     5.2                -          -
4   Mo            1450    8.17       80     7.3                203251     2315
5   Mo            1313    7.67       74     5.4                -          -
6   Mo            1175    7.12       69     2.8                -          -
7   W             1450    11.53      60     0.2                -          -
8   W             1313    11.56      60     0.3                -          -
9   W             1175    11.62      60     0.4                -          -
10  W-Ni          1450    16.98      88     10.5               322091     3426
11  W-Ni          1313    15.64      80     8.4                -          -
12  W-Ni          1175    14.47      74     7                  -          -

Fig. 1 and Table 2 show the density and shrinkage of the sintered compacts of the investigated compositions, as well as the indentation test data. For comparative evaluation, the values of the homologous sintering temperature and the relative density θ are given. The sintering temperature of 1175 °С corresponds to homologous temperatures of 0.4 for W and 0.5 for Mo, 1313 °C to 0.45 and 0.55, and 1450 °C to 0.5 and 0.6, respectively. The compacts of tungsten nanopowder were not sintered completely at these temperatures, but the addition of Ni nanopowder sharply activated the sintering process. Molybdenum compacts sintered at these temperatures also had a significant porosity; in this case, too, the Ni nanopowder additive significantly activated sintering. The elastic modulus EIT and microhardness HIT were determined only for the samples whose sintered density reached 80% or more.
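The homologous temperatures quoted above are simply the absolute sintering temperature divided by the melting point of the metal. The sketch below is an illustrative check (the melting points of roughly 3695 K for W and 2896 K for Mo are assumed handbook values, and the small differences from the paper's figures come from rounding):

# Homologous sintering temperature T_sint / T_melt (illustrative check).
T_MELT_K = {"W": 3695.0, "Mo": 2896.0}        # assumed handbook melting points, K

for t_c in (1175, 1313, 1450):
    t_k = t_c + 273.0
    print(t_c, {metal: round(t_k / t_melt, 2) for metal, t_melt in T_MELT_K.items()})
# roughly: 1175 -> W 0.39, Mo 0.50;  1313 -> W 0.43, Mo 0.55;  1450 -> W 0.47, Mo 0.59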

Table 3 shows the results of measuring the density and shrinkage of the compacts sintered in ammonia glow-discharge plasma, together with their elastic modulus and microhardness. The tungsten sample without nickel addition, as in the case of vacuum sintering, practically did not sinter. It is obvious that a temperature of 1450 °C is insufficient for solid-phase sintering of W nanopowder. The addition of 1% Ni nanopowder activated the sintering of tungsten. A comparison of the results of sintering the W nanopowder compacts in vacuum and in ammonia glow-discharge plasma shows that in the second case the samples had higher values of the elastic modulus and microhardness. This effect is explained by the fact that sintering of powder compacts in a glow-discharge plasma is activated.
Table 3. Properties of samples sintered in plasma (sintering temperature 1450 °С)
№   Composition   ρ, g/cm3   θ, %   У (shrinkage), %   EIT, MPa   HIT, MPa
1   Mo-Ni         7.86       77     5.3                257450     3063
2   Mo            6.83       66     0                  172696     1769
3   W             11.87      61     0                  -          -
4   W-Ni          16.33      84     10.4               394156     4131

Conclusion
The processes of forming and sintering tungsten and molybdenum nanopowders with additions of nickel nanopowder were investigated. The density, shrinkage, elastic modulus and microhardness of the sintered samples were determined. A positive effect of the nickel nanopowder additive on compaction during sintering was established; it leads to an increase in the mechanical properties of the sintered refractory metals.
References
1. N. Zelikman, B. Korshunov, Metallurgy of rare metals, Metallurgy (1991) 432.
2. B. Kolachev, V. Elagin, V. Livanov, Metallurgy and heat treatment of nonferrous metals and alloys, MISIS (2005) 432.
3. S. Matrenin, A. Ilin, A. Slosman, L. Tolbanova, Sintering of nanopowder iron // Advanced materials (2008) 81–87.
4. O. Nazarenko, Electroexplosive nanopowders: preparation, properties, applications, Tomsk (2005) 148 p.
5. A. Slosman, S. Matrenin, Electric-discharge sintering of ceramics based on zirconia // Refractory materials (1994) 24–27.

INVESTIGATION OF THE KINETICS OF DISSOLUTION OF GOLD IN AQUA REGIA Savochkina E.V., Bachurin I.A., Markhinin A.E. Scientific supervisor: Shagalov V.V. Tomsk Polytechnic University, 634050, Russia, Tomsk, Lenin avenue 30 еkaterina_._89 @ mail.ru
Gold was one of the first precious metals known to man since ancient times, and nowadays gold is the most widely used one. Due to the rapid development of communications technology, electronics, the aerospace industry and other industries, interest in gold has greatly increased. There is currently a large number of new gold alloys, as well as processes for coating with gold and obtaining multilayer materials. To further develop the fields of application of this metal, humanity investigates the properties of gold year after year, trying to subdue the precious metal. The main property of the noble metals, including gold, is chemical inertness, in particular their low ability to form oxygen compounds. Nevertheless, information about the dissolution of gold can be found in many sources. The aim of our work is to establish the kinetic features and the time of gold dissolution in aqua regia, because of the lack of such information in library resources and also because of the complexity of the determination of precious metals at the sample preparation stage, the effectiveness of which is determined by the completeness and speed of the transfer of the metal into solution.

One of the first methods that cast doubt on the inertness of gold was dissolution in aqua regia. Aqua regia is a mixture of concentrated hydrochloric and nitric acids (HCl : HNO3, 3:1 by volume). It is a yellow liquid with a smell of chlorine and nitrogen oxides.

Figure 1. A glass with aqua regia.
Aqua regia has a strong oxidizing ability. In particular, it dissolves almost all metals, including precious metals such as gold, palladium and


platinum, despite the fact that none of these precious metals is soluble in either of the acids contained in aqua regia taken individually. Nitric acid acts as an oxidant towards hydrochloric acid:
HNO3 + 3HCl = NOCl + Cl2 + 2H2O.
In this reaction two active species are formed: chlorine and nitrosyl chloride. They can dissolve the gold:
Au + NOCl + Cl2 = AuCl3 + NO.
Another molecule of HCl then adds to the newly formed gold chloride, giving H(AuCl4)·4H2O, known in common parlance as "chlorine gold":
AuCl3 + HCl = H(AuCl4).
This complex acid crystallizes with four water molecules in the form of H(AuCl4)·4H2O. Its crystals are light yellow, and its aqueous solution is also yellowish. If H(AuCl4)·4H2O is carefully heated, it decomposes with the separation of HCl and reddish-brown crystals of gold(III) chloride AuCl3. On further heating, all gold compounds are easily decomposed with the separation of metallic gold. (Similarly, evaporation of the acid solution of platinum yields red-brown crystals of H2(PtCl6)·6H2O.)
The origin of the name of aqua regia is connected with these special features. The alchemists who searched for the Philosopher's Stone, which could turn any metal into gold, considered gold itself "the king of metals"; and since gold is the king of metals, the "water" which dissolves it should be the king of waters. Hence the acid mixture was called aqua regia (Latin for "royal water"); in the Russian language, unlike many other languages, it became known as "tsarskaya vodka". Aqua regia is used as a reagent in chemical laboratories, for refining gold (Au) and platinum (Pt), for obtaining metal chlorides, and for other purposes. It is curious that aqua regia does not dissolve rhodium (Rh), tantalum (Ta), iridium (Ir), Teflon, and some plastics. Thieves sometimes use aqua regia (or concentrated nitric acid alone) to open padlocks: it is poured into the lock mechanism, and after a short wait the lock is simply knocked off with a hammer. Hobbyists also use aqua regia to extract gold from electronic components.
In our study we used aqua regia of the following composition: the concentration of hydrochloric acid was 28.5% and the concentration of nitric acid was 15%; the gold samples were of near-spherical shape with the dimensions h = 3.45 mm and d = 6.3 mm. In the experiments the gold was placed in an ordinary glass with aqua regia. The studies were carried out at 20, 40 and 60 degrees Celsius, with the state of the gold sample monitored over time. The maximum duration of the experiments reached 1500 s. For the mathematical processing of the data, in order to determine the degree of transformation of


matter as a function of time and temperature, the shrinking-sphere equation 1 − (1 − α)^(1/3) = kτ was used. The result is shown in Figure 2:

Figure 2. Dependence of the degree of dissolution of gold on time in the coordinates of the shrinking-sphere equation.
The next stage of our work was the determination of the activation energy; for this the Arrhenius equation was used: ln kT = ln k0 − Ea/(RT), where kT is the rate constant at temperature T, k0 is the pre-exponential (true) rate constant, Ea is the activation energy, R is the universal gas constant, and T is the reaction temperature. Having determined the slope of the plot of ln k versus 1/T, we can obtain the apparent activation energy of the reaction of gold dissolution in aqua regia, as shown in Figure 3:


Figure 3. Dependence of the logarithm of the rate constant (ln k) on the inverse temperature (1/T)
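A minimal sketch of the kinetic treatment described above is given below. The conversion-time data in it are invented placeholders, not the measured values, so the printed activation energy only illustrates the procedure (shrinking-sphere linearization at each temperature, followed by an Arrhenius fit of ln k against 1/T):

import numpy as np

def rate_constant(times_s, alphas):
    # Slope k of the shrinking-sphere linearization 1 - (1 - a)^(1/3) = k*t.
    y = 1.0 - (1.0 - np.asarray(alphas)) ** (1.0 / 3.0)
    return np.polyfit(times_s, y, 1)[0]

runs = {                                  # hypothetical runs at 298, 313 and 333 K
    298: ([300, 600, 900], [0.010, 0.020, 0.030]),
    313: ([300, 600, 900], [0.025, 0.050, 0.074]),
    333: ([300, 600, 900], [0.080, 0.155, 0.225]),
}
T = np.array(sorted(runs))
k = np.array([rate_constant(*runs[t]) for t in T])

slope, _ = np.polyfit(1.0 / T, np.log(k), 1)    # Arrhenius: ln k = ln k0 - Ea/(R*T)
print("Ea =", round(-slope * 8.314 / 1000.0, 1), "kJ/mol (for this synthetic data set)")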

We obtained a value of the activation energy equal to 60.8 kJ/mol. This value lies in the range typical of kinetic (chemical) control, which indicates that the limiting stage of the process is the chemical reaction itself.
Conclusions: In our work the kinetic characteristics of the process of gold dissolution in aqua regia in the temperature range 298-333 K and the time interval from 50 to 1500 seconds were determined. The activation energy of dissolution was also determined and amounted to about 60 kJ/mol. It was established that the rate-limiting step of the process is the chemical reaction stage. We are now carrying out similar studies of gold dissolution in iodide, bromide and thiosulfate solutions and in a solution of potassium tetrafluorobromate. After that we are going to compare the kinetic features of the above-mentioned processes.

Literature: 1.Неорганическая химия: в 3 т. / Под ред. Ю.Д. Третьякова. Т. 3: Химия переходных элементов. Кн. 2 / [А.А. Дроздов, В.П. Зломанов, Г.Н. Мазо, Ф.М. Спиридонов]. – М.: Академия, 2007. –400 с. 2.Барре П. Кинетика гетерогенных процессов. – М.: Мир, 1976.– 399 с. 3. Mitkin V.N. Fluorination of Aurum Metal and its Application Possibilities in the Synthesis, Analysis and Recovery Technology for Secondary Raw Materials // Aurum: Proc. of Intern. Symp. TMS 2000. – Nashville: Tennessee, 2000. – P. 377–390. 4. V.N. Mitkin. / Fluorine Oxidants in the Analytical Chemistry of Noble Metals/ журнал аналитической химии, том 56, №2, 2001 5. Материалы сайта: http://kristall.lan.krasu.ru/Education/Aurum/aurum. html. http://www.xumuk.ru/encyklopedia/2/3685.html.

EFFECT OF MOLDING PRESSURE ON MECHANICAL PROPERTIES AND ABRASIVE WEAR RESISTANCE OF UHMWPE Sonjaitham N. Scientific adviser: Panin S.V., PhD, professor. Tomsk Polytechnic University, 634021, Russia, Tomsk, Lenin ave E-mail: [email protected]
Introduction
Ultra-high molecular weight polyethylene (UHMWPE) is a polymer with an extremely high molecular weight. It possesses excellent wear resistance, high impact strength, good sliding quality and low friction loss, and its self-lubrication performance can be widely exploited in engineering applications [1–4], often in machine parts such as bearings, gears, bushings, linings, chain guides, hoppers and sprockets. All these applications are characterized mainly by their high demands on wear resistance [5]. In addition, UHMWPE has been used as a replacement for cartilage in total joint prostheses, such as hip/knee joint replacements, because of its good biological compatibility and high resistance to the biological environment [6]. The tribological behaviour of UHMWPE is much influenced by its mechanical properties. Therefore, many different methods have been applied to enhance the mechanical properties of UHMWPE. The process of consolidation of UHMWPE requires a proper choice of pressure, temperature and time. Changes in these three molding variables can affect the mechanical properties of UHMWPE [7]. The aim of this work is to study the effect of molding pressure on the mechanical properties and abrasive wear resistance of UHMWPE.

Materials and research technique
UHMWPE powder with a particle size of 50-70 µm (GUR by Ticona, Germany) was used for the specimen preparation. The molecular weight of the UHMWPE powder used is 2.6×106 g/mol. The UHMWPE powder was used to prepare test specimens by means of a compression machine and, subsequently, a hot-pressing mould. Compression was carried out under pressures of 10, 15 and 20 MPa, and the temperature was maintained at 190 ºC for 120 minutes. The specimens were cooled in the mould at a cooling rate of 3-4 ºC/min. The specimens had the shape of a rectangular prism 45 mm long, 50 mm wide and 8 mm high. Tensile tests were performed using an “Instron 5582” universal machine; the specimen shape and test method were according to ASTM D638 (Standard Test Method for Tensile Properties of Plastics). Wear tests were performed using a “MИ-2” abrasive testing machine. The tests were run without lubrication according to ГОСТ 426-77 (Standard Test Method for determination of abrasion resistance under slipping). The specimens had the shape of a rectangular prism 10 mm long, 10 mm wide and 8 mm high; abrasive paper with a grit grade of 240 (series 1913 siawat fc, made


in Switzerland) was fixed on the rotating disc surface; the revolution rate was 40 rpm, the load was 30 N, and the specimen was fixed in a holder. The test duration was 40 minutes, and after each test the loss in specimen mass was recorded. The wear volume was computed from the mass loss of the specimen.
Results
Figure 1 shows the mechanical properties of UHMWPE specimens prepared under pressures of 10, 15 and 20 MPa. The UHMWPE specimen molded at a pressure of 20 MPa shows the highest values of ultimate tensile strength, up to 24.2 MPa, and elongation, up to 313.3%.
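The conversion from the recorded mass loss to wear volume is a simple division by the material density; the short sketch below assumes a typical UHMWPE density of about 0.93 g/cm3 (the paper does not state the value it used) and an invented mass loss, so the number is purely illustrative.

# Wear volume from measured mass loss (illustrative; density value assumed).
UHMWPE_DENSITY_G_PER_MM3 = 0.93e-3      # typical handbook density of UHMWPE

def wear_volume_mm3(mass_loss_g):
    return mass_loss_g / UHMWPE_DENSITY_G_PER_MM3

print(wear_volume_mm3(0.05))            # hypothetical 50 mg loss -> about 54 mm^3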

Figure 1. Mechanical properties of UHMWPE specimens prepared under pressures of 10, 15 and 20 MPa: (a) ultimate tensile strength (MPa); (b) elongation (%).
Figure 2. Comparison of the wear volume loss (mm3) of UHMWPE specimens prepared under pressures of 10, 15 and 20 MPa versus testing time (min).
Figure 3. SEM images of UHMWPE specimens molded under pressures of (a) 10 MPa, (b) 15 MPa and (c) 20 MPa.
Figure 2 shows the wear volume loss of UHMWPE specimens prepared under pressures of 10, 15 and 20 MPa. The testing time was 40 minutes; after every 5 minutes of testing the loss in specimen mass was recorded, and the wear volume was computed from the mass loss. The UHMWPE specimen molded at a pressure of 20 MPa has the lowest wear volume loss, which means that this specimen has the highest abrasive wear resistance. Figures 3a–c show SEM images of UHMWPE specimens molded under the different pressures. It was found that the molding pressure has a noticeable influence on the microstructure of the UHMWPE specimens [7].


Conclusion
The effect of molding pressure on the mechanical properties and abrasive wear resistance of UHMWPE was studied on specimens prepared under pressures of 10, 15 and 20 MPa. It was found that the molding pressure has a significant influence on the mechanical properties [7] and abrasive wear resistance of UHMWPE. The UHMWPE specimen molded at a pressure of 20 MPa shows the highest values of mechanical properties and abrasive wear resistance, and the molding pressure also influences the microstructure of the UHMWPE specimens.
Acknowledgment
This research was assisted by the staff of the Institute of Strength Physics and Materials Science SB RAS.

Reference 1. D.S. Xiong, S.R. Ge, Friction and wear properties of UHMWPE/Al2O3 ceramic under different lubricating conditions, Wear 250 (2001) 242–245. 2. C.Z. Liu, J.Q. Wu, J.Q. Li, L.Q. Ren, J. Tong, A.D. Arnell, Tribological behaviours of PA/UHMWPE blend under dry and lubricating condition, Wear 260 (2006) 109–115. 3. Y. Xue, W. Wu, O. Jacobs, B. Schdel, Tribological behaviour of UHMWPE/HDPE blends reinforced with multi-wall carbon nanotubes, Polymer Testing 25 (2006) 221–229. 4. Hsien-Chang Kuo, Ming-Chang Jeng, The influence of injection molding on tribological

characteristics of ultra high molecular weight polyethylene under dry sliding, Wear 268 (2010) 803–810. 5. L.M. Brunner and T.A. Tervoort, Abrasive wear of ultra-high molecular weight polyethylene, Encyclopedia of Materials: Science and Technology (2006) 1-8. 6. D. Dowson, The James Clayton Memorial Lecture 2000, an ordinary meeting of the Institution held at IMechE Headquarters, London, on Wednesday 28 June 2000. 7. Shibo Wang, Shirong Ge, The mechanical property and tribological behavior of UHMWPE: Effect of molding pressure, Wear 263 (2007) 949–956

STUDY OF FRACTURE PATTERNS OF SPRAYED PROTECTIVE COATINGS AS FUNCTION OF THEIR ADHESION Yussif S.A.K., Alkhimov A.P., Kupriyanov S.N. Scientific adviser: Panin S.V., PhD, professor. Tomsk Polytechnic University, 634034, Russia, Tomsk, Lenina ave, 30 E-mail: [email protected]
Introduction
In the last few years a number of technologies for spraying protective, hardening and functional coatings possessing high operating characteristics have been worked out. Analysis of the literature testifies that the character of plastic deformation at the mesoscale level in a composition with a pronounced plane “coating-substrate” interface is principally determined by the thickness of the coating, by the value of the adhesive strength, and also by the relationship between the mechanical characteristics of the interfaced materials [1]. Thus, investigation of the plastic deformation pattern at the mesoscale level in the “low-carbon steel – thermal sprayed coating” composition showed that under loading of specimens with protective coatings whose ultimate strength is lower than the yield strength of the substrate, crack nucleation occurs not at the interface but on the outer surface of the coating [1]. It was also shown that at a certain thickness of a thermal sprayed coating, failure of the composition occurs by way of adhesive-cohesive cracking. If this takes place, the coating is retained on the surface of the substrate up to high degrees of plastic deformation. The development of such an effect was related to the presence of pores in the coating, at which effective relaxation of the stress concentrators acting at the tips of cracks propagating in the coating took place. When the thickness of the coating is increased above a certain value (150 µm), the through porosity is reduced, which results in the development of plastic

deformation by way of total adhesive flaking of the coating. One more regularity revealed and described in [1] was the relationship between the value of adhesive strength and the pattern of plastic deformation evolution at the mesolevel in the substrate. In particular, it was shown that under a low level of adhesive bonding the flaking of the coating occurs only due to the action of shear stresses in the region of the Lüders band propagation. In this case localization of plastic deformation in the subsurface layer of the substrate happens only in a small zone of forthcoming flaking of the coating, which was revealed by the analysis of displacement vector fields. If the value of adhesive strength was increased, for instance by preliminary shot-blasting of the substrate, the flaking of the coating occurred at the expense of local bending of the specimens. The bending was observed in the region of the specimens where the tip of an adhesive crack was revealed. The latter, in its turn, propagated along the interface parallel to the front of the Lüders band [1]. This work presents the results of an investigation of plastic deformation processes at the mesolevel in compositions with double-layer gas-dynamic coatings of various compositions having, accordingly, different values of adhesive strength. Materials and research technique Low-carbon steel was used as the substrate material. The coating was sprayed onto both


plane faces of the specimens by cold gas-dynamic spraying. Double-layer coatings based on copper, zinc and aluminum were formed by using different deposition conditions. The specimens for testing were dumb-bell shaped; the size of the working part of the specimens was 24.9×2.7×4.9 mm. The test on static uniaxial tension was carried out using the "IMASH 2078" mechanical testing machine at a rate of 0.05 mm/sec. The patterns of plastic flow were investigated with the help of the Television-Optical Meter for Surface Characterization "TOMSC". The plastic deformation behavior at the mesolevel was studied by the analysis of constructed displacement vector fields of surface patches. Results The analysis of the results obtained allowed us to reveal three major ways of plastic deformation evolution at the mesolevel in the investigated compositions, which are defined, first of all, by the relationship between the cohesive and adhesive strength of the coatings. The first scenario of plastic flow development consists in initial cohesive cracking of the coating along the entire specimen gauge length, followed by its breaking down into fragments, with subsequent plastic deformation localization in the vicinity of the interface, and completed by adhesive flaking of the formed coating fragments. The second way of plastic deformation evolution in compositions with gas-dynamic coatings is relay-race cohesive cracking of the coating accompanied by the formation of coating fragments between two neighbouring transverse cracks and their further adhesive flaking. The third scenario of plastic flow evolution in the investigated compositions is complete adhesive flaking of the coating that is not accompanied by its cracking. The pattern of plastic deformation development at the mesoscale level in compositions whose plastic flow is accompanied by coating flaking (because of low adhesive strength) was already investigated in [1] by the example of thermal-sprayed hardening coatings based on PG-10Ni-01 and PG-19Ni-01 powders. These compositions had low ductility, and cracking of the coating occurred even below an applied stress of 100 MPa. The evolution of plastic flow at the mesoscale level in the compositions investigated in this work develops in a similar way. At the same time, revealing the dependence between the pattern of continuity disturbance in the "coating–substrate" composition and the value of the adhesive strength of the coating is the major advantage of the investigations carried out. Let us point out the main features which were not described in [1] and which are typical for such compositions. Local bending of the coating is the reason for primary crack nucleation and spreading in the composition under investigation. It can be


stated that plastic deformation in the coating initiates earlier than in the substrate (ductile aluminum, zinc and copper served as coating materials), which results in the homogeneous pattern of plastic flow development in the coating being restrained by the steel substrate. The latter governs local bending of the coating (as seen in the displacement vector fields). At the stage of secondary cracking, plastic deformation develops more intensively in the substrate, and the fragments of the cracked coating retard the homogeneous plastic flow evolution in it. As a result, the substrate material begins to experience local bending, resulting in the propagation of a secondary crack from the interface towards the surface of the coating. The pattern of plastic deformation development is traced most completely when deformation evolves by the first scenario. It is possible to state that local bending precedes the formation of a continuity disturbance at the interface (spreading of the adhesive crack) at a substantial value of the adhesive bonding. The latter is accompanied by a vortex motion of the material, which increases the power of the stress concentrator acting at the interface and provides the flaking of the coating fragments. Thus, the results obtained, as well as the materials of previous investigations of coated materials, allow us to reckon that the vortex pattern of plastic deformation development precedes the formation of the discontinuity. However, under a low level of adhesive bonding (the third scenario) the size of the region with a vortex manner of plastic flow evolution is too small. That is why the latter was not observed in the displacement vector fields at the magnifications used in this work. Under plastic flow evolution by the second scenario, the value of adhesive bonding was rather high for the coating simply to flake off the substrate. At the same time, initiation of the transverse cracks in the coating makes possible the nucleation and spreading of adhesive cracks from these cracks along the interface. The process of adhesive crack growth was also intensified by the local bending of the specimen due to the restraint of homogeneous plastic flow development in the subsurface layer of the substrate. It is possible to contend that the quasi-periodic flaking, clearly correlating with the thickness of the coating, is determined by the local bending of the specimens upon crack nucleation in the coating. The latter, being a structural notch, governs the bending of the specimens, but the emergence of the next crack provides bending of the opposite side, which "retains a given axis of loading". Conclusion 1. Depending on the relationships between the adhesive and cohesive strengths of gas-dynamic sprayed coatings, three major ways of plastic flow

evolution can be revealed in the compositions under investigation: • The first version consists in initial cohesive (primary) cracking of the coating, resulting in its division into fragments, followed by further evolution of localized plastic deformation in the vicinity of the interface and completed by the adhesive separation of the coating fragments. • The second scenario of plastic flow evolution in compositions with gas-dynamic coatings is relay-race cohesive cracking of the coating with the consequent formation of coating fragments and their further adhesive flaking. • The third scenario of plastic deformation development in the investigated compositions is complete adhesive flaking of the coating, which does not exert influence on the development of plastic flow in the substrate. 2. Incompatibility of the plastic flow evolution in the coating and the substrate results in local bending of the specimen, which is the major reason for stress concentrator nucleation at the interface. As a result of their relaxation, the propagation of a transverse

cohesive crack in the coating as well as of an adhesive crack along the interface takes place. The secondary cohesive cracking favored the fragmentation of the coating, stipulating the adhesive flaking of small-size fragments. 3. The presence of a gas-dynamic coating restrains homogeneous plastic deformation development in the substrate as a whole, causing a less pronounced strain-induced relief in the subsurface layer at low strains in comparison with the underlying substrate material. With increasing strain this effect is revealed as the formation of longitudinal "folds" with an extension of some hundreds of microns at the boundary between the subsurface layer and the underlying substrate material. References 1. V.A. Klimenov, S.V. Panin, V.P. Bezborodov. Investigation of plastic deformation at the mesoscale level and fracture of a "thermal coating – substrate" composition under tension. Physical Mesomechanics. Vol. 2, No. 1-2, p. 141-156, 1999.

COMPOSITION INFLUENCE OF UHMWPE BASED PLASTICS ON WEAR RESISTANCE Ziganshin A.I. Scientific advisor: Kondratuk A.A. Linguistic advisor: Demchenko V. N. Tomsk Polytechnic University, 634050, Russia, Tomsk, 30 Lenin st. E-mail: [email protected] The design of structures and individual machine parts requires knowledge of the mass loss of contacting surfaces. It is mainly caused by abrasive microparticles, which inevitably deposit on the surface. Nowadays there are many investigations of new UHMWPE-based materials with different fillers. That is why it is very important to study the dependence of wear resistance on the composition. UHMWPE is a polyethylene with a molecular mass of about 1.5×10^6 g/mol. This fact defines its unique mechanical and physical properties, making it different from other polyethylene grades. These specific properties determine its application areas: UHMWPE is used when ordinary polyethylene and other thermoplastics cannot withstand severe operating conditions. High impact elasticity together with chemical, corrosion and wear resistance defines a wide range of applications for durable parts [1]. Due to the low friction coefficient, frictional heat generation is reduced to a minimum, so such parts do not require lubrication during maintenance. The creation of UHMWPE-based composite materials allows

to increase the characteristics of polymer materials and expand the areas of their application. The fracture behavior depends on the matrix properties, the content of particles and their adhesion to the matrix. An increase of the filler content can lead to a change of the fracture mechanism from plastic to brittle. Thermoplastics such as polyamide, polyformaldehyde and polycarbonate are often used as the polymer matrix for wear-resistant materials. These materials are used to manufacture parts by pressure casting, extrusion or hot pressing, so they are very suitable for serial production. Disperse powders with a laminate crystalline lattice, for instance graphite, boron nitride and disperse powders of nonferrous metals such as copper, are widely used as anti-friction fillers. Fluoroplastic-4, polyethylene wax and liquid anti-friction fillers are also used as organic products; they can be used in combination as well. The filler content is usually 1–15 %, since a further increase of the content can worsen the properties [2]. For the mass loss research of UHMWPE-based polymer materials, the authors fabricated flat cylindrical specimens of different composites, where disperse copper and boron nitride were used as fillers. The compositions were as follows: UHMWPE "TNHK" and UHMWPE "Ticona" without fillers; UHMWPE "TNHK" + 3% Cu, + 7% Cu, + 10% Cu, + 13% Cu, + 50% Cu, + 50% Cu (after heat treatment at 150 ºC), + 81% Cu, + 81% Cu (after heat treatment at 150 ºC); UHMWPE "TNHK" + 3% BN, + 7% BN, + 10% BN, + 13% BN. The wear resistance research was carried out with the help of the IIP-1 device under dry abrasive wear conditions with free-moving particles on a steel surface. The research objects were cylindrical specimens with the following dimensions: height 10 mm, diameter 15 mm. The basic wear evaluation method was the measurement of mass loss with TYP WA-33 scales with 0.00005 g accuracy; the measurement period was 90 minutes. Graphs 1-4 were drawn according to the experimental results. First, the mass loss of "TNHK" and "Ticona" was estimated (Fig. 1).

Figure 1. Wear Δm, g, versus time t, min, for unfilled "TNHK" and "Ticona"

During the first 60 minutes there was no difference between the wear of the two materials, but after 90 minutes the mass loss of "TNHK" was greater than that of "Ticona". The analysis of the results obtained for the copper-filled specimens allows the conclusion that an increase of the filler amount leads to a reduction of the wear resistance, except for the specimen with 13% of copper (Fig. 3). As for the specimens with boron nitride, the wear increases at 3–7% of filler and decreases at 10–13% (Fig. 2).

Figure 2. Wear Δm, g, versus time t, min: "TNHK" + BN (3, 7, 10, 13%)

Moreover, the effect of high-temperature treatment on the destruction of the matrix was researched (Fig. 3, 4). The results are ambiguous; nevertheless, they allow the conclusion that the wear of the specimens after heat treatment is higher in comparison with the initial ones.

Figure 3. Wear Δm, g, versus time t, min: "TNHK" + Cu (3, 7, 10, 13, 50%; 50% after heat treatment)

Figure 4. Wear Δm, g, versus time t, min: "TNHK" + Cu (81%, initial and after heat treatment)

References
1. Ultra high molecular weight polyethylene of high density / Ed. by I.N. Andreeva, E.V. Veselovskaya, E.I. Nalivaiko. – Publishing House "Chemistry", 1982.
2. Kryzhanovsky V.K., Burla V.V., Panimatchenko A.D., Kryzhanovskaya Y. Technical properties of polymeric materials. – "Profession", 2003. – 240 p.


Section VII: Informatics and Control in Engineering Systems

Section VII

INFORMATICS AND CONTROL IN ENGINEERING SYSTEMS



SIMULATION PROCESS PROCEEDING IN THE ELECTROLYZER FOR FLUORINE PRODUCTION FOR COMPUTER SIMULATOR FOR OPERATOR OF TECHNOLOGICAL PROCESS Belaynin A.V., Denisevich A.A., Nagaitseva O.V. Supervisor: Nagaitseva O.V., assistant Ermakova Ya.V., teacher Tomsk Polytechnic University, 634050, Russia, Tomsk, 30 Lenin str. E-mail: [email protected] A computer simulator is being developed for training the operating personnel in safe and effective methods of controlling the electrolyzer for fluorine production, at the workplace and in different emergency situations. The electrolyzer's structure is given in [1]. The key element of the computer simulator is a production simulation model which includes several interconnected elements. The basic one is the model of the processes proceeding in the electrolyzer, which forms the basis of the technological scheme of fluorine production. This article presents the results of creating mathematical models of the technological process in the electrolyzer for fluorine production in the range of HF concentration of 38-42 % and electrolyte temperature of 368-378 K, which corresponds to the normal operating mode. Based on the previous models, a new mathematical formulation of the process model was obtained [2], for the development of which cell (compartment) modeling was used. In accordance with it, the volume of the apparatus was divided into three zones (numbered 0, 1, 2). These zones are described by a set of lumped parameters (concentration of hydrogen fluoride, mass, temperature and electrolyte conductivity). Zone 0 includes the central section of the apparatus and part of the heat exchanger. The central section does not contain cathode cells; hydrogen fluoride (HF) is fed into it, and the measurements of electrolyte temperature and HF concentration are realized with its help. Zones 1 and 2 include two sections and part of the heat exchanger; the first zone lies to the left of the central zone and the second to the right. The electrolyte hydrodynamics in each zone is described by the ideal mixing model, and the electrolyte is considered to be a single-phase incompressible fluid medium. The effect of the gas phase formed as a result of electrolysis is ignored. The processes of heat and mass transfer between zones are governed by the natural electrolyte circulation in the volume of the apparatus [5] and are described by the flow G. It is supposed that the injected gaseous hydrogen fluoride passes into the electrolyte immediately. The load current is equal to the sum of the currents flowing through each section, as the sections are connected in parallel in the electric circuit. Then the currents of the first and second zones are calculated as: I_1 = k_I · I, I_2 = (1 − k_I) · I   (1)


where k_I is the irregularity coefficient of the current distribution between the zones, determined on the basis of statistical processing of data from the operating industrial electrolyzer. The composition of the electrolyte changes due to the consumption of HF for hydrogen and fluorine formation, HF evaporation from the electrolyte surface into the space under the electrolyzer cover, and the HF supply for its compensation. These processes take place during fluorine production in the electrolyzer. The loss of electrolyte due to its removal with the electrolyzer products is considered to be insignificant. Accordingly, the material balance of HF for each zone can be calculated by the system of equations (2):

m^0 · dC_HF^0/dt = G_HF + G·(C_HF^1 + C_HF^2) − (G_HF + 2G)·C_HF^0 − G_u^0,
m^k · dC_HF^k/dt = (G + G_HF/2)·C_HF^0 − G·C_HF^k − G_I^k − G_u^k,  k = 1, 2.   (2)
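To show how the simulator can advance this cell model in time, a minimal sketch in Python is given below. It integrates system (2) with an explicit Euler step; all numerical values (zone masses, the circulation flow G, the HF feed and the consumption flows G_I^k and G_u^k, which are defined through relations (3)-(7) further on) are placeholder assumptions and not parameters of the real apparatus.

```python
# Minimal sketch (not the authors' Matlab model): explicit Euler integration of the
# three-zone HF material balance (2). All numbers are placeholder assumptions; in the
# real model G_I^k and G_u^k are computed from relations (3)-(7).
m = [100.0, 200.0, 200.0]      # electrolyte mass in zones 0, 1, 2, kg (assumed)
C = [0.40, 0.40, 0.40]         # HF mass fraction in each zone (normal mode: 38-42 %)
G = 5.0                        # circulation flow between zones, kg/s (assumed)
G_HF = 0.02                    # HF feed into zone 0, kg/s (assumed)
G_I = [0.0, 0.008, 0.008]      # HF consumed by electrolysis in zones 0, 1, 2, kg/s (assumed)
G_u = [0.001, 0.002, 0.002]    # HF evaporating under the cover, kg/s (assumed)

dt, t_end, t = 0.1, 600.0, 0.0  # time step, horizon and current time, s
while t < t_end:
    dC0 = (G_HF + G * (C[1] + C[2]) - (G_HF + 2 * G) * C[0] - G_u[0]) / m[0]
    dC1 = ((G + G_HF / 2) * C[0] - G * C[1] - G_I[1] - G_u[1]) / m[1]
    dC2 = ((G + G_HF / 2) * C[0] - G * C[2] - G_I[2] - G_u[2]) / m[2]
    C = [C[0] + dt * dC0, C[1] + dt * dC1, C[2] + dt * dC2]
    t += dt

print("HF concentration after 10 min, zones 0-2:", [round(c, 4) for c in C])
```

In the full simulator the same right-hand sides would be evaluated together with the heat balance and voltage relations given below.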

In accordance with the overall electrolysis reaction and Faraday's law, the HF mass flow needed for hydrogen and fluorine formation in each zone can be calculated by relation (3):

G_I^k = 2 · (M_HF / M_F2) · k_eF · I_k,   (3)

where M_HF, M_F2 are the molar masses of HF and F2, respectively, and k_eF is the electrochemical equivalent of fluorine. In this case, HF is consumed only in the first and the second zones, as the cathode cells are placed there. The consumption of HF evaporating from the electrolyte surface into the space under the electrolyzer cover for the k-th zone is calculated by the relation:

G_u^k = G_AS^k + G_CS^k,   (4)

where G_AS^k, G_CS^k are the mass flows of HF evaporating into the anode and cathode space under the electrolyzer cover, respectively. According to the experimental data, the average content of HF in the fluorine product makes up 6 % in the operating ranges of temperature and HF concentration. So, in accordance with Faraday's law and the explanation given above, we can estimate the total mass flow of HF (G_AS)

evaporating from the electrolyte surface into the anode space under the electrolyzer cover by the relation:

G_AS = (6/94) · k_eF · I.   (5)

Since the 1st and the 2nd zones are structurally identical, and zone 0 does not have an anode space,

G_AS is divided into two equal parts between the first and the second zones, and for the zero zone it equals zero. The consumption of HF evaporating into the cathode space can be estimated from the condition of equal evaporation flux per unit area:

G_AS / S_AS = G_CS / S_CS,   (6)

where S_AS, S_CS are the total surface areas of the anode and cathode spaces under the electrolyzer cover. Then the consumption of HF for the k-th zone evaporating into the cathode space is defined by the following expression:

G_CS^k = (S_CS^k / S_AS) · G_AS,   (7)

where S_CS^k is the surface area of the cathode space in the k-th zone. In describing the heat exchange process, the electrolyzer is represented as a system which receives heat from the electric current (Joule heat) and which gives off heat through the heat exchanger, the outer walls of the shell and with the removed products. The heat consumption due to evaporation of the electrolyte and other factors is negligible and is therefore ignored. On the basis of the zone model of electrolyte hydrodynamics it is considered that heat from the electric current is released, and heat is removed with the output product, only in the first and second zones. The zones exchange heat by means of the circulation flow G. According to this, the heat balance through the flow of electrolyte can be described by the following equations (8):

dQ_e^0/dt = Q_HF + (Q_G−^1 + Q_G−^2) − 2·Q_G+^0 − Q_H^0 − Q_en^0,
dQ_e^k/dt = Q_el^k + Q_G+^0 − Q_G−^k − Q_en^k − Q_H^k − Q_G^k,  k = 1, 2,   (8)

where k is the number of the zone; Q_e^k, Q_H^k, Q_G^k, Q_en^k, Q_HF, Q_G+^0, Q_G−^k, Q_el^k are, respectively, the heat contained in the electrolyte, the heat carried away by the heat exchanger, by the flue gases and to the environment, the heat brought in by HF, the heat of the direct and reverse circulation flows between the zones, and the Joule heat. The temperature of the cooling water at the output of each zone is determined by equation (9):

ρ_w·c_w·( S·dT_w^0/dt + (G_w^0/3)·dT_w^0/dl ) = K_H·π·D·(T_e^0 − T_w^0),
ρ_w·c_w·( 2.5·S·dT_w^k/dt + (G_w^k/2)·dT_w^k/dl ) = 2.5·K_H·π·D·(T_e^k − T_w^k),  k = 1, 2.   (9)

As the hydrodynamic model of the cooling water flow, the plug-flow (ideal displacement) model is accepted. The total voltage drop in the electrolytic cell is determined by the following expression:

U = E_d + E_el + ΔE + E_elec.   (10)

The theoretical decomposition voltage E_d for the reaction in the temperature range from 263 K to 383 K is 2.92 V on average, varying by no more than 0.01 V [4]. The voltage drop in the electrolyte E_el can be defined as follows:

E_el = I · R_el,   R_el = (R_el^1 · R_el^2) / (R_el^1 + R_el^2),   (11)

where R_el, R_el^1, R_el^2 are the combined resistance of the electrolyte in the interelectrode space and the resistances in the first and second modeling zones, respectively. The electrolyte resistance in the k-th zone is:

R_el^k = (1 / σ_el^k) · (l / S_el^k),   (12)

where l is the distance between the electrodes (equal for both zones), and S_el^k, σ_el^k are the average cross-section area of the electrolyte between the electrodes and the electrolyte conductivity in the k-th zone. The value of the electrolyte conductivity is calculated by the empirical dependence given in [5]. Theoretically, the total polarization ΔE can be determined by the laws derived from the Tafel equation [3].
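For illustration, the voltage balance (10)-(12) can be assembled as in the sketch below. The conductivities, geometry, polarization term and load current in it are assumed placeholder values rather than data of the real electrolyzer; E_elec is the electrode and contact drop quoted in the next paragraph.

```python
# Minimal sketch of the cell voltage balance (10)-(12). All numbers are assumptions
# for illustration; they are not taken from the operating apparatus.
E_d = 2.92            # theoretical decomposition voltage, V
dE = 0.5              # total polarization (Tafel-type term), V (assumed)
E_elec = 0.05         # voltage drop in the electrodes and contacts, V
I = 10000.0           # load current, A (assumed)

l = 0.04              # distance between the electrodes, m (assumed)
S = [0.5, 0.5]        # average electrolyte cross-sections of zones 1 and 2, m^2 (assumed)
sigma = [45.0, 47.0]  # electrolyte conductivities of zones 1 and 2, S/m (assumed)

R1 = (1.0 / sigma[0]) * (l / S[0])   # zone resistances, relation (12)
R2 = (1.0 / sigma[1]) * (l / S[1])
R_el = R1 * R2 / (R1 + R2)           # parallel combination, relation (11)

E_el = I * R_el                      # voltage drop in the electrolyte, relation (11)
U = E_d + E_el + dE + E_elec         # total cell voltage, relation (10)
print(f"R_el = {R_el * 1e3:.3f} mOhm, E_el = {E_el:.2f} V, U = {U:.2f} V")
```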

The value E_elec includes the voltage drop in the electrodes and contacts and amounts to about 0.05 V; it is calculated in [2]. A preliminary assessment of the qualitative operation of the model was carried out in Matlab and showed its efficiency. In the future, a detailed study of the static and dynamic adequacy of the model will be performed using data on the operation of the working apparatus. References 1. Nagaitseva O.V., Liventsova N.V., Liventsov S.N. // Bulletin of the Tomsk Polytechnic University. Control, Computer Engineering and Informatics. – 2009. – Vol. 315, No. 5. – P. 89–93. 2. Liventsova N.V. Automated control system of a medium-temperature electrolyzer for fluorine production // Cand. Eng. Sci. thesis: 05.13.06. – TPU, 2008. – 199 p. 3. Bagotsky V.S. Fundamentals of Electrochemistry. – Moscow: Khimiya, 1988. – 400 p. 4. Galkin N.P., Krutikov A.B. Fluorine Technology. – Moscow: Atomizdat, 1968. – 188 p. 5. Fluorine Chemistry. Part 1. / Ed. by I.L. Knunyants: Transl. from English. – Moscow: Inostrannaya Literatura, 1948. – 248 p.



EXPLICIT LOOK AT GOOGLE ANDROID Bobkova A.N., Chesnokova A.A. Language supervisor: Pichugova I.L., senior teacher Tomsk Polytechnic University, 30, Lenin Avenue, Tomsk, 634050, Russia E-mail: [email protected] Introduction In 2007 rumors about Google's intentions of competing with Apple's iPhone started to circulate. This news interested a lot of people and, of course, raised many questions. Would Google get into the hardware business? Would the company rely on established cell phone manufacturers for hardware? Would Google simply concentrate on building smartphone applications for other devices like the iPhone? Only by the year 2008 did it become clear that Google was getting into the handset software business with the mobile operating system (OS) called Android. Android OS was released to work on phones built by different manufacturers without providing any single service provider with exclusive rights to the platform. In that respect, Android joins other mobile device operating systems like Symbian and Windows Mobile. An important factor that sets Android apart from most other mobile operating systems is that it is based on an open source platform. That means Google allows anyone to look at and modify most of Android's source code. Ideally, this means that if a developer feels Android needs a specific feature or capability, he can build it and incorporate it into the OS. The software would constantly evolve. Google Android Architecture Google usually refers to the Android OS as a software stack for mobile devices [3]. Each layer of the stack groups together several programs supporting specific operating system functions. The base of the stack is the kernel. Google used the Linux version 2.6 OS to build Android's kernel, which includes memory management programs, security settings, power management software and several hardware drivers [1]. For example, the Android kernel includes a camera driver allowing the user to send commands to the camera hardware. The next level of the software stack includes Android's libraries, representing sets of instructions that tell the device how to handle different kinds of data [1]. For example, the media framework library supports playback and recording of various audio, video and picture formats. Located on the same level as the libraries layer, the Android runtime layer includes a set of core Java libraries and the Dalvik virtual machine. Each Android application runs within an instance of the Dalvik VM, which in turn resides within a Linux-kernel-managed process [2], as shown in Figure 1.


That is important because applications will not be interdependent, and if any application running on the device crashes, the others will not be affected.

Fig. 1. Application's structure (Linux kernel → Linux process → Dalvik Virtual Machine → Android application)

The next layer is the application framework. By providing an open development platform, Android offers developers the ability to build extremely rich and innovative applications. They are free to take advantage of the device hardware, access location information, run background services, set alarms, add notifications to the status bar, and much, much more. Moreover, the application architecture is designed to simplify the reuse of components [3]. It means that any application can publish its capabilities and any other application may then make use of those capabilities. At the top of the stack there are the applications themselves. Nowadays, it is not enough for a smartphone to be able to make phone calls, check e-mail and surf the Web. You need to have a host of useful, fun, productive and even pointlessly entertaining applications at your disposal. Android's strong app library can excite customers. If you are an average user, this is the layer you will use most, with the help of the user interface. Only Google programmers, application developers and hardware manufacturers access the other layers down the stack. Building Android Applications In order to build an Android application, a developer has to be familiar with the Java programming language. If he is, he should download the software development kit (SDK) and the Eclipse IDE and get started. Coding in the Java language within Eclipse is very intuitive, because Eclipse provides a rich Java environment including context-sensitive help and code suggestion hints. The SDK gives the developer access to Android's application programming interface (API) and includes several tools, among them sample applications and a phone emulator which imitates the functions of a phone running on the Android platform. Using such a program the developer can test his application while building it.

Section VII: Informatics and Control in Engineering Systems Google cares about its developers by providing generous support for them. It includes different tutorials on Android developer Web site and tips on basic programming steps like testing and debugging software. Google even provides stepby-step instructions on how to build an application named Hello World, which is a usual start point for almost every programmer in learning almost every programming language. Another feature of Android is multi-tasking support. As it was mentioned, this feature is possible due to Dalvik virtual machine. So, Android developers can create complex applications that run not only in the foreground but also in the background of other applications. Each Android application can have (but it is not obligatory) four basic building blocks called application components. Each of them serves a distinct purpose and has a distinct lifecycle that defines how the component is created and destroyed: • Activities. An activity represents a single screen with a user interface. For example, a map application could have a basic map screen, a trip planner screen and a route overlay screen. That is three activities. • Intents. Intent is the mechanism aimed at moving from one activity to another. Android also permits broadcast intent receivers which are intents triggered by external events like moving to [2] a new location or an incoming phone call. • Services. A service is a component that runs in the background to perform long-running [2] operations. A service does not provide a user interface. For example, a service might play music in the background while the user is in different application, or it might fetch data over the network without blocking user interaction with an activity. • Content providers. A content provider allows an application to share information with other applications. For example, the Android system provides a content provider that manages the user's contact information. Therefore any application with the proper permissions can query part of the content provider to read and write [2] information about a particular person. An Android application is composed of more than just code – it requires resources that are separate from the source code, such as images, audio files, and anything relating to the visual presentation of the application. For example, you should define animations, menus, styles, colors, and the layout of activity user interfaces with XML files. This approach makes it easy to update “appearance” of your application without modifying the code. Furthermore, it enables you to optimize your application for a variety of device configurations (such as different languages and screen sizes). So, developers must keep a lot of different considerations in mind while building Android applications.In addition to unveiled above

Android's internal content unveiled above, there are some facts that are probably of some interest. Interesting Facts about Android 1) History. The Android OS was not the brainchild of Google. It was devised in 2003 by the tiny startup company Android Inc. and sold to Google for $50 million in 2005. At the time of the acquisition, as nothing was known about the work of Android Inc., some guessed that Google was planning to enter the mobile phone market. Google put the concept in cold storage until several companies, including Google, HTC, Motorola, Intel, Qualcomm, Sprint Nextel, T-Mobile, and NVIDIA, came together to form the Open Handset Alliance at the end of 2007. They stated their goal of developing open standards for mobile devices and unveiled Android [3]. 2) Updates. Like all software and operating systems, Android gets regular updates, which are quite intriguingly named using words associated with pastries and pastry baking, moving forward in alphabetical order: 1.5 – Cupcake, 1.6 – Donut, 2.0 / 2.1 – Éclair, 2.2 – Froyo, 2.3 / 2.4 – Gingerbread, 3.0 – Honeycomb, possible mid-2011 release – Ice Cream Sandwich [3]. 3) Enormous code length. The Android OS is made up of over 12 million lines of code, which includes 3 million lines of XML, 2.8 million lines of C, 2.1 million lines of Java, and 1.75 million lines of C++ [3]. 4) Developer Challenge. The Android Developer Challenge (ADC) was launched by Google in 2008 with the aim of providing awards for high-quality mobile applications built on the Android platform. There are 10 specially designated ADC 2 categories to which developers submit their apps and for which Google offers prizes totaling 10 million dollars [3]. Conclusion Android is hitting today's market and properly competing with the most famous operating system for mobile devices, Apple's iPhone. The reasons for Android's growing popularity are its considerable advantages. First of all, Google does not try to hide Android's code from curious developers; moreover, their curiosity is encouraged with comprehensive support and exciting challenges. In addition, Android supports various modern multimedia formats. So, a large number of opportunities, freedom, and support for different phone models ranging from very simple and cheap to expensive and filled with high-end features are fairly raising Android to the first place in the mobile world. References 1. Yudin M. Google Android Architecture. [Electronic resource] Access mode: http://www.realcoding.net/article/view/4767


2. Programming Basis for Android Platform. [Electronic resource] Access mode: http://softandroid.ru/articles/razrabotka/2363article.html

3. Android Operating System. [Electronic resource] Access mode: http://en.wikipedia.org/wiki/Android_(operating_system)

THE DEVELOPMENT OF WEB-APPLICATION FOR CLASSIFIER REPRESENTATION OF INTEGRAL CLASSIFIER SYSTEM Ksenia Fedorova Scientific adviser: S.V. Axyonov, M.V. Yurova Tomsk Polytechnic University, Russia, Tomsk city, Lenin Street, 30 E-mail: [email protected]

Introduction Classifiers are the most important factor in the backbone of informatization and the basis of a common language for presenting information. Their role in the implementation of an integrated information environment is particularly important. Classifiers should be used for: 1. unambiguous presentation of information, which facilitates interaction with external systems; 2. input of information by selecting a value from a list. Because of the necessity of working with classifiers, we must be able to view the existing classifiers, their descriptions and lists of values, as well as edit the existing data. The aim of this work is to create a web application for displaying the classifiers of the USK (Unified System Classifier). The development of such an application is necessary because a local application has the following problems: it is difficult to scale the solution; significant processing power and storage space are needed at each workplace; administration costs on the client side are high and grow with the number of workplaces. In order to achieve this goal, the following tasks should be solved: 1. to explore the principles of building the unified information environment of the university and the integral classifier system; 2. to analyze the existing application for classifier maintenance; 3. to develop the application.

Functions description The developed application allows the following actions: for a user with administrator privileges: • create and edit the description of a classifier; • maintain a table of access rights; • review the description and list of classifier values; • view the hierarchy of existing classifiers; for normal users: • review the description and list of classifier values; • view the hierarchy of existing classifiers; • change the password.

Authorization When the application is loading, the user should enter a login and password. After clicking the "Login" button, user authentication is performed with the help of the function my_auth (p_username in VARCHAR2, p_password in VARCHAR2). This function returns «true» if the input data correspond to the information stored in the table «APP_USERS». If the function returns «false», the user is invited to re-enter the data. The table «APP_USERS» includes the attribute "Administrator Rights", which contains «Y» or «N» values. With the help of this attribute, rights for viewing, creating or editing information about the classifiers can be assigned to users.
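The authorization logic just described is implemented in PL/SQL inside the Oracle application. The fragment below is only an illustrative sketch of the same check written in Python against a local SQLite table; the APP_USERS table and the Y/N rights attribute follow the description above, while the hashing scheme and the sample data are assumptions.

```python
# Illustrative sketch only: the real my_auth function is a PL/SQL function of the
# application. Here the same check is mimicked with SQLite; the password hashing
# scheme and the sample user are assumptions.
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE app_users (username TEXT PRIMARY KEY, pwd_hash TEXT, is_admin TEXT)")

def add_user(username, password, is_admin="N"):
    h = hashlib.sha256(password.encode()).hexdigest()
    conn.execute("INSERT INTO app_users VALUES (?, ?, ?)", (username, h, is_admin))

def my_auth(p_username, p_password):
    """Return True if the credentials match a row of APP_USERS (cf. the PL/SQL function)."""
    h = hashlib.sha256(p_password.encode()).hexdigest()
    row = conn.execute("SELECT 1 FROM app_users WHERE username = ? AND pwd_hash = ?",
                       (p_username, h)).fetchone()
    return row is not None

def is_administrator(p_username):
    """Check the "Administrator Rights" attribute (Y/N) of the user."""
    row = conn.execute("SELECT is_admin FROM app_users WHERE username = ?",
                       (p_username,)).fetchone()
    return row is not None and row[0] == "Y"

add_user("admin", "secret", "Y")
print(my_auth("admin", "secret"), is_administrator("admin"))   # True True
print(my_auth("admin", "wrong"))                               # False
```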

Fig. 1. View of the main page for the administrator


Configuring and Administering Applications After successful authentication the user is redirected to the home page, which contains the main parts of the application.

If the user has administrator rights, he has access to the page for creating and editing application users, where he can create, delete and edit the data of any user of the web application.

Fig. 2. Page "User" for the administrator If the user has limited rights, then he gets the opportunity to change his password and write a letter to the developer of the web application. View and create descriptions of the classifier When the user clicks on "Classifiers", the review page of classifier descriptions opens. A user with limited privileges can only look through the descriptions of the classifiers and search by some fields. The user can also change the number of records displayed on the page by choosing from a list. As opposed to a regular user, the administrator can edit, delete and create classifier descriptions.

Fig. 3. Page "Classifiers" for the administrator The description is changed by clicking on the edit icon located in the first column. After clicking on the icon, the page "Change classifier" opens. On this page the values of certain fields can be changed, whereas the values of the "inactive" fields are inserted automatically by triggers. The description of the classifier can also be removed. A classifier is created by clicking on "Create a classifier" on the page "Classifiers". Then the page "Add a description" opens. The user can fill in the appropriate fields and press the "Next" button, or go back to the list of classifiers by clicking on "Cancel".

Fig. 4. Steps of the "Create classifier" process When the user presses the "Next" button, the "Create Table" page opens, and at the same time the procedure "insert_standart_column" is launched. This procedure contains an SQL query which adds to the table «TEMP_ATTRIBUT» data about the standard attributes of the classifier: creation date, date of entry into the archive, modification date, record status, the user who created the record and the user who changed the record. When the "Create Table" page is opened, the user can create new attributes by clicking on the "Add Attribute" button. Next, the user is invited to create a primary key. By default, the system does not create a primary key, but if it is necessary, the user must select the item "Generation of a sequence of values". When choosing this item the user must enter the names of the primary key constraint and the sequence, as well as choose the key attribute from the list of attributes entered on the previous page, and then click "Next". On the next page, the user can add foreign keys. To do this he must enter the name of the foreign key constraint, select the attribute which will be a foreign key, the referenced table and the referenced attribute. After filling in all these fields, the user must click on the "Add" button; the entered values are displayed below and are also inserted into the table «TEMP_FKEY», which contains data about foreign keys. After adding the foreign keys, the user can add unique keys. On the "Unique key" page such fields as "Name of unique key constraint" and "Key attribute" are available for filling in. After clicking on "Add", the entered values are displayed below and are inserted into the table «TEMP_UNIQUE». After clicking on "Next" on the "Unique key" page, the user must confirm the creation of the classifier description, the table and all keys. When "Finish" is clicked, the procedure "insert_into_all_tables" is called, all previously completed fields are cleaned, the data from the tables «TEMP_ATTRIBUT», «TEMP_FKEY» and «TEMP_UNIQUE» is removed, and the "Classifications" page opens.
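To illustrate what the final confirmation step has to assemble, the sketch below builds the corresponding CREATE TABLE statement from attribute and key descriptions. It is a hypothetical mock-up in Python; the real application does this with the PL/SQL procedure insert_into_all_tables from the data accumulated in «TEMP_ATTRIBUT», «TEMP_FKEY» and «TEMP_UNIQUE», and all table, column and constraint names below are invented examples.

```python
# Hypothetical mock-up of the DDL assembled at the "Finish" step; all names are invented.
attributes = [("code", "VARCHAR2(10)"), ("name", "VARCHAR2(200)"),
              ("region_code", "VARCHAR2(10)"), ("date_created", "DATE")]
primary_key = ("pk_country", "code")                            # constraint name, key attribute
foreign_keys = [("fk_country_region", "region_code", "regions", "code")]
unique_keys = [("uk_country_name", "name")]

def build_create_table(table, attributes, primary_key, foreign_keys, unique_keys):
    lines = [f"{col} {typ}" for col, typ in attributes]
    lines.append(f"CONSTRAINT {primary_key[0]} PRIMARY KEY ({primary_key[1]})")
    for cname, col, ref_table, ref_col in foreign_keys:
        lines.append(f"CONSTRAINT {cname} FOREIGN KEY ({col}) "
                     f"REFERENCES {ref_table} ({ref_col})")
    for cname, col in unique_keys:
        lines.append(f"CONSTRAINT {cname} UNIQUE ({col})")
    return "CREATE TABLE {} (\n  {}\n)".format(table, ",\n  ".join(lines))

print(build_create_table("countries", attributes, primary_key, foreign_keys, unique_keys))
```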


View the hierarchy of classifiers After clicking on "Hierarchy of qualifiers", the user can look through the hierarchy of classifiers on the main page. This page is available to both the common user and the administrator. Conclusion The developed web-based application allows creating, editing and deleting the descriptions of classifiers, viewing lists of classifier values, and dividing access rights to information. In the future we are planning to improve the web-based application by adding a user authorization scheme

that is needed for a clearer allocation of access rights, and the ability to create, edit and delete classifier values. References 1. Regulations on the unified system of classification and coding of information of Tomsk Polytechnic University. 2. Feuerstein, Steven. Oracle PL/SQL Programming. 2009. – 1300 p. 3. Greenwald, Rick. Beginning Oracle Application Express. 2009. – 386 p.

WORKING OUT THE PENDULUM DESIGN AND ALGORITHM INVERTING ON THE BASIS OF LABORATORY STAND TP-802 OF FIRM FESTO Fedorov V.A., Kondratenko M.A., Pastyhova E.А. Scientific leader: Fomin V.V., Ph. D., associate professor Tyumen State Oil and Gas University, Volodarskogo st., 38, Tyumen, 625000, Russia. E-mail: [email protected] In this work the problem of developing a control algorithm for inverting a physical pendulum fixed on a mobile support – the carriage of an electromechanical drive – is considered. The drive is operated by a stepper motor. To solve the problem, a mathematical model of the system was constructed and analyzed, and the control algorithm was realized in practice [1]. As a result, a practical realization of pendulum inverting was obtained on the basis of the equipment of the Festo TP-802 laboratory stand [2]. In the course of the research and development we used the Festo WinPISA 4.41 software application and the following hardware: the SPC200 positioning controller [3], the EMMS-ST stepper motor, the SEC-ST motor controller and the DGE electromechanical linear belt-driven actuator [4]. All equipment is manufactured by Festo. For the given characteristics of the Festo TP-802 laboratory stand (Table 1) it was necessary to carry out the following tasks:

Table 1 - Basic characteristics of the stand Festo TP-802
Range of movement S max, m | Max. speed V max, m/s | Max. acceleration a max, m/s^2 | Weight of the carriage M, kg
0.3 | 0.7 | 4 | 0.45

a) to calculate and realize a pendulum design, having defined its length and weight depending on the characteristics of the stand and the requirements for the maximum angle of rotation;


b) to develop an algorithm for transferring the pendulum from the lower stable position into the upper unstable equilibrium position; c) to develop the code of the program realizing the obtained control algorithm for the pendulum on a mobile support. To transfer the pendulum into the inverted state with the available characteristics of the Festo laboratory stand (Table 1), it was necessary to define the design of the pendulum model. From the results of the experiments (Table 2), the pendulum parameters (length l and mass of the weight m) at which inverting of the pendulum into the upper unstable equilibrium position occurs for the minimum number of carriage strokes were selected: pendulum length l = 0.3 m, mass of the weight m = 0.028 kg.

Table 2 - Results of the laboratory experiments on the definition of the pendulum parameters
l, m | m, kg | V, m/s | S, m | θ, deg | n, times
0.2 | 0.014 | 0.7 | 0.2 | ≥ 180 | 11
0.2 | 0.028 | 0.7 | 0.2 | ≥ 180 | 10
0.2 | 0.042 | 0.7 | 0.2 | ≥ 180 | 9
0.3 | 0.014 | 0.7 | 0.29 | ≥ 180 | 8
0.3 | 0.014 | 0.7 | 0.2 | ≤ 60 | –
0.3 | 0.014 | 0.5 | 0.29 | ≥ 180 | 8
0.3 | 0.028 | 0.5 | 0.29 | ≥ 180 | 5
0.3 | 0.028 | 0.5 | 0.2 | ≤ 60 | –
0.3 | 0.042 | 0.7 | 0.29 | ≥ 180 | 6
0.3 | 0.042 | 0.7 | 0.2 | ≤ 60 | –
0.3 | 0.1 | 0.7 | 0.29 | ≤ 45 | –
0.4 | 0.014 | 0.7 | 0.29 | ≤ 145 | –
0.4 | 0.014 | 0.7 | 0.2 | ≤ 60 | –

where l is the pendulum length; m is the mass of the weight; V is the carriage speed; S is the carriage displacement; θ is the angle of rotation of the pendulum; n is the number of carriage motions.


The Festo TP-802 laboratory stand was supplemented with a pendulum construction with the selected parameters of length l and mass m (Fig. 1). The construction of the pendulum consists of: − aluminium wire, d = 4 mm; − plastic wheel, d = 50 mm; − bolt, size 5×40 mm; − washers and nuts, M5; − putty.

Figure 1 - Construction of the pendulum and the Festo TP-802 laboratory stand When developing the control algorithm it is necessary to take into account the conditions following from the equipment descriptions: − the transfer of the pendulum from the lower stable position into the upper unstable one should be realized only at the expense of moving the carriage within a limited distance along one axis under zero initial conditions (deviation angle of the pendulum, carriage speed, carriage displacement); − the control of the executive mechanism – the carriage – is realized by the P-control principle (programmed control). The presence of only programmed control limits the capabilities of the given system, because the stabilization problem becomes unrealizable. The programmed control principle consists in the following: for a given angle which has to be reached (θ final value), an application program (U) is created, the parameters of which are calculated according to the mathematical model (x, V, a). Then, on the basis of this program, the SPC200 controller develops the control action (F) which, by means of the interaction of the system components, actuates the carriage. Due to the carriage motion the pendulum deviates from the vertical axis and then swings further. As a result, which can be measured by means of special equipment, after each movement of the carriage we obtain the displacement, speed and acceleration of the carriage and the deviation angle, angular speed and angular acceleration of the pendulum. The functional diagram of the system is presented in Figure 2.


Figure 2 - Functional diagram of the control of a pendulum on a mobile support with P-control (dashed line – with C-control) The developed algorithm of the system operation is presented in Figure 3. At the beginning of the system operation, the code of the application program is loaded into the controller together with the necessary information on the settings of the positioning axis, and the carriage is positioned in its initial position. The carriage of the electromechanical actuator, according to the control action, makes the number of movements set by the program: (n−k) times forward and back. This period of carriage movement corresponds to the period of growth of the pendulum oscillations – its swing up to an angle θ ≥ 150º. Then k more carriage movements follow, during which the pendulum reaches an angle θ = 180º and is brought to the upper unstable equilibrium position. The pendulum keeps this position for several seconds and then, under the influence of external and internal forces, begins to move towards the lower equilibrium position. At this moment, switching of the program to stabilizing control is necessary. In the given work the stabilization problem was not posed because of the absence of feedback measuring the state variables θ and s.

(Flowchart of Figure 3: Start → 1. Adjustment of the equipment → 2. Formation of the control action → 3. Moving of the carriage n times / deviation of the pendulum → Stop)
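A simple way to see why the programmed sequence of carriage moves swings the pendulum up is to simulate a pendulum on an accelerating support. The sketch below is only an illustration under stated assumptions: the pendulum is a point mass on a massless rod, the bang-bang switching rule replaces the program actually calculated in WinPISA, and friction is neglected; the numerical limits are taken from Tables 1 and 2.

```python
# Illustrative sketch (not the authors' WinPISA program): swing-up of a pendulum on a
# carriage that performs programmed back-and-forth moves. theta is measured from the
# lower (hanging) position; the switching rule is an assumption for illustration only.
import math

g, l = 9.81, 0.3          # gravity, m/s^2, and pendulum length from Table 2, m
a_max, s_max = 4.0, 0.3   # carriage limits from Table 1: acceleration, m/s^2, stroke, m
dt, t_end = 1e-3, 10.0    # integration step and time horizon, s

theta, omega = 0.01, 0.0  # initial deviation angle, rad, and angular speed, rad/s
x, v = 0.0, 0.0           # carriage position, m, and speed, m/s

t = 0.0
while t < t_end and abs(theta) < math.pi:
    # crude energy-pumping rule: accelerate the carriage against the swing direction
    a = -a_max * math.copysign(1.0, omega * math.cos(theta))
    if abs(x) > s_max / 2 and x * v > 0:      # do not run past the end of the axis
        a = -math.copysign(a_max, v)
    # pendulum on an accelerating support: theta_dd = -(g/l)*sin(theta) - (a/l)*cos(theta)
    alpha = -(g / l) * math.sin(theta) - (a / l) * math.cos(theta)
    omega += alpha * dt
    theta += omega * dt
    v += a * dt
    x += v * dt
    t += dt

print(f"t = {t:.2f} s, theta = {math.degrees(theta):.0f} deg, carriage x = {x:.3f} m")
```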


Figure 3 - Algorithm of the operation of the pendulum control system on a mobile support (block 3 is carried out n times) The result of the identification of the control object (in this case, the creation and analysis of the mathematical model) and of the control technique is the code of the application program implementing the obtained control algorithm for the pendulum on a mobile support. It contains the information on the relocation of the carriage over a certain distance with a certain speed and acceleration, written in the assembler language of the corresponding software application.

Literature sources: 1. Astrom K.J., Block D.J., Spong M.W. The Reaction Wheel Pendulum. Morgan and Claypool, 2007. – 112 p. 2. Festo AG & Co. KG // Industrial automation [Electronic resource]. – 2008. Access mode: http://www.festo.ru. 3. SPC200 Smart Positioning Controller. WinPISA software package. Festo AG & Co. KG, 2005. – 381 p. 4. Positioning system. Smart Positioning Controller SPC200. Manual. Festo AG & Co. KG, Dept. KI-TD, 2005. – 371 p.

STATISTICAL METHODS IN EVALUATING MARKETING CAMPAIGN EFFECTIVENESS Garanina N.A. Supervised by: Berestneva O.G. Tomsk Polytechnic University, 634050, Russia, Tomsk, Lenina str., 30 E-mail: [email protected] Advertising is a kind of investment in one's own profit, and therefore it should be planned carefully. In fact, however, companies do not use any applications for planning it; they simply test advertising and then launch it in the media. This is a consequence of the fact that marketing managers usually do not know how to use the various mathematical applications which could help them to find the optimal way of budget management. As a result, companies lose their money, but no one can tell them for sure what went wrong. This article shows how marketing managers can use regression analysis in Microsoft Excel and thereby change their situation for the better. Microsoft Excel is usually used for ordinary arithmetic operations, but there are additional packages (e.g. the statistical analysis package) which can be used for solving quite difficult marketing tasks. How can people understand whether this or that marketing campaign is going to be successful or not? Everyone has their own answer to this question. In my opinion, the more people know about our brand and our company as a whole, the more effective our campaign was. It is one of the rare situations when such categories as QUALITY (of the marketing campaign) and QUANTITY (of the people informed) are in direct dependence on each other. We should not forget that effective advertising should have an optimal cost. We should analyze marketing data to optimize our costs and find the best plan for the marketing campaign. Moreover, we can analyze these data in two


different ways: formally (by using statistical and econometric methods) and informally (by using only qualitative assessment). Preference should be given to the formal approach, even though it is very time-consuming, because there are many difficulties connected with the simulation of marketing situations: complexity and unpredictability of the object, nonlinearity of marketing processes, instability of marketing linkages, the complexity of measuring marketing variables, etc. However, these methods have a high degree of accuracy and objectivity, which cannot be said about the informal ones. In this evaluation of marketing effectiveness I use pairwise (simple) regression analysis. This kind of analysis is used when a researcher wants to know how one variable affects another. Let us analyze the activity of a company that mainly produces and sells a product X. The company organizes regular promotions to familiarize consumers with product X, and that, of course, affects the level of sales. After analyzing the time series of sales and money spent on advertising, we can obtain: • the correlation between advertising costs and sales levels; • an econometric model that relates sales to advertising costs; • some characteristics of the influence of advertising on sales. Problem: there are two time series (advertising costs and sales). Their values were recorded every month during a year.

Section VII: Informatics and Control in Engineering Systems The task is to define the relationship between advertising costs and sales volume and make recommendations. Then make a prediction of what sales company would have at a cost of 30 and 60 units. The influence of the other factors should not be taken into account. On the first stage of solving the problem it is necessary to make a plot in «advertising costs» «sales volume» coordinates and evaluate the link form between the studied parameters (figure 1).

Fig. 1. Plot in «advertising costs» – «sales volume» coordinates The optimal cost of advertising is approximately equal to 40 units, because the chart shows a rapid growth in sales volume which slows down at advertising costs of more than 40 units. (Here a "unit" is a kind of notional value.) At the second stage the researcher should construct a polynomial trend of the second degree by means of Microsoft Excel. The choice of the most suitable function characterizing the trend is usually made empirically, by constructing a series of functions and comparing them with each other by the value of the determination coefficient (R^2, where R is the correlation coefficient). The trend parameters are determined by the least squares method. In this case the determination coefficient is equal to 0.86 for the linear function and 0.97 for the polynomial function, which means that it is better to use the polynomial one for further analysis as it has a larger determination coefficient. In practice, the following functions are used most often: • at even development – a linear function

y(x) = a0 + a1·x;
• at accelerated growth:
a) a square parabola
y(x) = a0 + a1·x + a2·x^2;

Fig. 2. Polynomial trend

b) a cubic parabola
y(x) = a0 + a1·x + a2·x^2 + a3·x^3;

• at constant rates of growth – an exponential function; • at reduction with slowdown – a hyperbolic function. At the third stage the researcher should assess the adequacy of the obtained regression model by testing the statistical significance of the regression parameters and of the regression equation as a whole. To do this it is necessary to make a full calculation using statistical analysis tools. Let us use the built-in "Regression" procedure of Microsoft Excel, but first rearrange the original data so as to create another column containing the square of the advertising costs. The statistical significance of the regression equation is evaluated by the "Significance F" parameter, which should be less than or equal to 0.05 if the equation is statistically significant. In our case it is less than this critical value, and therefore the obtained equation is statistically significant. Then the researcher should check the statistical significance of the regression parameters by checking the "P-value". The check should be performed for each regression coefficient. If its estimated value is less than or equal to 0.05 (the chosen significance level), then this regression coefficient is considered statistically significant. Otherwise it is not statistically significant and can be excluded from the regression equation. In our example the first coefficient (the intercept) is not statistically significant, but both the others are. The final form of the regression equation is as follows:

y(x) = 24.33488·x – 0.20475·x^2. It should be noted that, in terms of a rigorous statistical approach, a regression equation can be recognized as statistically significant only if all of its parameters are statistically significant; if not, the equation is not statistically significant at this significance level. At the last (fourth) stage the researcher should make predictions based on the regression model. The predictions of sales volume for our example are the following: y(30) = 24.33488·30 – 0.20475·30·30 ≈ 546; y(60) = 24.33488·60 – 0.20475·60·60 ≈ 723. In this way, quantitative methods of analysis make it possible to identify the optimal advertising cost and to find out how advertising costs affect the sales volume. Many other important variables were not taken into account in this model, therefore, of course, there is some inaccuracy in it. Evaluating the effectiveness of advertising costs is an important task that is essential in developing cost-effective advertising. It determines the development of direct and indirect ways and methods of preliminary evaluation of the effectiveness of advertising costs.
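Since the original monthly series is not reproduced in the paper, the sketch below uses invented data only to show the same workflow in code: fitting the second-degree trend by least squares, computing R^2 and predicting sales at 30 and 60 units, as done above with Excel's "Regression" tool.

```python
# Minimal sketch with hypothetical monthly data (the paper's raw series is not given):
# least-squares fit of y = a0 + a1*x + a2*x^2 and predictions for x = 30 and 60 units.
import numpy as np

ads   = np.array([5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60], dtype=float)
sales = np.array([110, 215, 300, 380, 440, 500, 545, 580, 600, 615, 620, 620], dtype=float)

X = np.column_stack([np.ones_like(ads), ads, ads**2])   # design matrix for the parabola
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

fitted = X @ coef
r2 = 1 - np.sum((sales - fitted) ** 2) / np.sum((sales - sales.mean()) ** 2)
print("a0, a1, a2 =", np.round(coef, 5), " R^2 =", round(r2, 3))

for x in (30.0, 60.0):
    print(f"predicted sales at {x:.0f} units:", round(coef[0] + coef[1] * x + coef[2] * x ** 2))
```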


Nowadays an integrated approach to assessing advertising effectiveness, taking into account its economic and communicative (mental) performance, the sales increase and the encouragement of consumers to make the subsequent choice, is considered to be very promising. Microsoft Excel has a very friendly interface and can be widely used for this purpose. This technology of analyzing advertising effectiveness based on statistical methods is used for analyzing the marketing campaign of "Bio-icecream" Ltd. References 1. Lashkova, E.V., Kucenko, A.I. Marketing: practice research. – M.: Publishing Center "Academy", 2008. – 240 p.

2. Furati, K.M., Zuhair Nashed, Abul Hasan Siddiqi. Mathematical Models and Methods for Real World Systems. – Chapman & Hall/CRC, 2005. – 455 p. 3. Phillips, J. Measuring the Effectiveness of Your Advertising Campaign. – Access mode: http://www.articlesbase.com/marketingarticles/measuring-the-effectiveness-of-youradvertising-campaign-564280.html 4. Mokrov, A.V. Predicting the effectiveness of an advertising campaign: the race in real time. – Access mode: http://www.sostav.ru/columns/opinion/2006/stat41/

CALCULATION AND VISUALIZATION OF THE PLASMA X-POINT LOCATION FOR THE KTM TOKAMAK
Khokhryakov V.S.
Supervisor: Pavlov V.M., Assoc. Prof., PhD
Tomsk Polytechnic University, 30, Lenin Avenue, Tomsk, 634050, Russia
E-mail: [email protected]
1. Introduction
The Kazakhstan Tokamak for Material testing studies (KTM) supports the ITER project in plasma-material interaction investigations [1], which makes software support for KTM extremely urgent. Accurate knowledge of the magnetic field structure and the current distribution in a tokamak is of fundamental importance for achieving optimum tokamak performance. Methods, algorithms and software for recovering the plasma magnetic surfaces from external magnetic measurements are necessary to control the position and shape of the plasma in real time, and to support other physical diagnostics and analysis in the intervals between discharges. The magnetic topology is first derived from the magnetic measurements; from it, the shape and position of the last closed magnetic flux surface (LCFS) and the radial dependence of the relevant shape parameters (such as elongation and triangularity) are determined.


Figure 1. Plasma magnetic surfaces
2. Divertor and control coils
In modern tokamaks a much more complicated divertor configuration is created by the poloidal magnetic field coils. These coils are necessary even for a plasma of circular cross-section: they create a vertical magnetic field component which, interacting with the main plasma current, keeps the plasma loop from rolling towards the wall in the direction of the major radius. In the divertor configuration the poloidal field coils are arranged so that the plasma cross-section is

elongated in the vertical direction. In this case closed magnetic surfaces are preserved only inside the separatrix; outside it the field lines go into the divertor chamber, where the plasma flows leaving the bulk are neutralized. In the divertor chamber the load on the divertor plates can be alleviated by additional cooling of the plasma through atomic interactions.
3. Methods of controlling the shape of the plasma
Methods for plasma control have evolved in parallel with improvements in the estimation of plasma shape and position. The most recent change of control methodology has been the transition from so-called "gap control" to "isoflux" control, which exploits the capability of the new real-time EFIT algorithm to calculate magnetic flux at specified locations within the tokamak vessel. Real-time EFIT can calculate very accurately the value of the flux in the vicinity of the plasma boundary. Thus, the controlled parameters become the values of flux at prespecified control points along with the r and z position of the X-point. By requiring that the flux at each control point be equal to the same constant value, the controller forces the same flux contour to pass through all of these control points. By choosing this constant value equal to the flux at the X-point, this flux contour must be the last closed flux surface, or separatrix. The desired separatrix location is specified by selecting one of a large number of control points along each of several control segments. An X-point control grid is used to assist in calculating the X-point location by providing detailed flux and field information at a number of closely spaced points in the vicinity of the X-point.

4. Algorithm for calculating the position of the X-point
An algorithm is needed for calculating the position of the X-point. The gradient descent method is the most expedient for this task, given the software resources available for its implementation. The mathematical basis of the gradient descent method is given below:

F(Bz, Bτ) = 0,    (4.1)
X^0(x_1^0, x_2^0, …, x_n^0) → X^1(x_1^1, x_2^1, …, x_n^1),    (4.2)
x_n^1 = x_n^0 − ∂F(X)/∂x_n,    (4.3)
x_n^(i+1) = x_n^i − ∂F(X)/∂x_n.    (4.4)

Figure 2 shows a graphical representation of this method. The solution is obtained by introducing some initial conditions, after which the algorithm itself leads to the desired result.

Figure 2. Gradient descent method
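A minimal sketch of this search is given below. It assumes a callable poloidal-field model (the hypothetical b_field function stands in for the values supplied by the X-point control grid), takes F as the squared poloidal-field magnitude, which vanishes at the X-point, and uses an illustrative step size; none of these choices are taken from the paper itself.

# Minimal sketch of the gradient-descent search for the X-point, following
# x_{i+1} = x_i - grad F(x_i) as in (4.4). b_field(r, z) -> (Bz, Br) is a
# hypothetical stand-in for the reconstructed poloidal field.
import numpy as np

def b_field(r, z):
    # Hypothetical analytic field with a null (X-point) at r = 0.9, z = -0.8.
    return np.array([(r - 0.9) + 0.3 * (z + 0.8), (z + 0.8) - 0.3 * (r - 0.9)])

def F(p):
    """F(Bz, Br) = |B_pol|^2: zero exactly at the X-point."""
    bz, br = b_field(*p)
    return bz * bz + br * br

def grad_F(p, h=1e-6):
    """Central-difference gradient of F."""
    g = np.zeros(2)
    for k in range(2):
        dp = np.zeros(2); dp[k] = h
        g[k] = (F(p + dp) - F(p - dp)) / (2 * h)
    return g

p = np.array([1.2, -0.4])           # initial condition inside the control grid
step = 0.5                          # illustrative step size
for _ in range(200):
    g = grad_F(p)
    p = p - step * g                # x_{i+1} = x_i - step * dF/dx
    if np.linalg.norm(g) < 1e-10:
        break
print("X-point (r, z) ≈", p)        # converges to (0.9, -0.8) for this toy field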

5. Calculation and visualization
Calculation and visualization of the X-point were carried out in accordance with the gradient descent algorithm described above. The algorithm was implemented as a custom program written in C++ and included in the basic program for reconstruction and visualization of the plasma pinch for the KTM tokamak. The results of this work can be seen in Figure 3.

Figure 3. Reconstructed configuration (with X-point)

This custom function gives the coordinates of the X-point for each time slice of the plasma and also allows the X-point to be displayed graphically on the plasma cross-section.
6. Conclusion and future developments
The conclusions based on an analysis of the obtained results include the following:
• a special program was developed which provides the calculation and visualization of the X-point in the KTM tokamak;


• this program was included in the main program for reconstruction and visualization of the plasma;
• numerical experiments were conducted to determine the accuracy and speed of the calculation. In these experiments the X-point coordinates were calculated every 32 ms over a 5 s discharge. The following results were obtained: maximum error δ = 1.2 %; program run time t = 0.22 ms.

The development of this custom function is only part of the development of the plasma control software. Creating a resource-efficient and competitive program is the main objective of the project. Prospects for improvement:
• maximum optimization of the program code;
• checking the speed and the demands on computing resources;
• checking the program directly under the actual operating conditions of the KTM tokamak.

7. References
[1] E.A. Azizov, KTM project (Kazakhstan Tokamak for Material Testing), Moscow, 2000.
[2] L. Landau, E. Lifshitz, Course of Theoretical Physics, vol. 8, Electrodynamics of Continuous Media, 2nd ed., Pergamon Press, 1984.
[3] Q. Jinping, Equilibrium Reconstruction in EAST Tokamak, Plasma Science and Technology, Vol. 11, No. 2, Apr. 2009.
[4] W. Zwingmann, Equilibrium analysis of steady state tokamak discharges, Nucl. Fusion 43, 842, 2003.
[5] O. Barana, Real-time determination of internal inductance and magnetic axis radial position in JET, Plasma Phys. Control. Fusion 44, 2002.
[6] L. Zabeo, A versatile method for the real time determination of the safety factor and density profiles in JET, Plasma Phys. Control. Fusion 44, 2002.

ASSESSING A CONDITION OF PATIENTS WITH LIMB NERVES TRAUMA USING WAVELET TRANSFORMS
M.A. Makarov
Language advisor: Yurova M.V.
Tomsk Polytechnic University, 30, Lenin Avenue, Tomsk, 634050, Russia
[email protected]
Introduction
Nowadays the method of acting on the organism with magnetic impulses is widely used in medical practice. These impulses cause a positive reaction of the organism in the case of some diseases. In particular, this method is used for regeneration of a damaged limb nerve in the SRI of Balneology and Physiotherapy in Tomsk. The method is called transcranial magnetic stimulation (TMS); an electromagnetic coil is placed on the scalp.

A high current is switched on and off in the electromagnetic coil. The Medtronik magnetic stimulator records the biphasic impulse produced in response to the magnetic field. This signal is called the Induced Magnetic Reply (IMR); it is shown in Figures 2(a, b).

Fig. 2a. Healthy person IMR sample

Fig. 2b. Unhealthy person IMR sample

Fig. 1. Impact of magnetic field on central nervous system


These figures show that the forms of the IMR signals of a healthy and an unhealthy person are different. A doctor who rates these signals faces many problems in diagnosing the severity of the injury. That is why two problems exist: 1) mathematical description of the signal; 2) diagnosing the severity of the injury with the help of this description. This article presents a solution of the first problem: a mathematical description of the signal by means of the wavelet transform.
Wavelet transform
A wavelet is a wave-like oscillation with an amplitude that starts out at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation" like one might see recorded by a seismograph or heart monitor. Generally, wavelets are purposefully crafted to have specific properties that make them useful for signal processing. Wavelets can be combined, using a "shift, multiply and sum" technique called convolution, with portions of an unknown signal to extract information from that signal. For example, a wavelet could be created to have a frequency of Middle C and a short duration of roughly a 32nd note. If this wavelet were convolved at periodic intervals with a signal created from the recording of a song, the results of these convolutions would be useful for determining when the Middle C note was being played in the song. Mathematically, the wavelet will resonate if the unknown signal contains information of similar frequency, just as a tuning fork physically resonates with sound waves of its specific tuning frequency. This concept of resonance is at the core of many practical applications of wavelet theory [1]. Samples of wavelets are shown in Figures 3, 4 and 5.

Fig. 3 Meyer wavelet

Fig. 5 Mexican hat wavelet

Wavelet technologies for transforming and manipulating signals are included in MATLAB, Mathcad and Mathematica. In this work, wavelet transforms of the medical signal are computed with the Wavelet Toolbox in MATLAB [2].
Working process
To transform the IMR signal, a Meyer wavelet is used, because the coefficients of this wavelet show most accurately the difference between the signal of a healthy person, the signal of an unhealthy person, and that of a person after the treatment. 2D graphics of the wavelet coefficients of a healthy person, an unhealthy person and a person after the treatment are shown in Figures 6, 7 and 8.
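The same processing step can be sketched outside MATLAB. The fragment below assumes PyWavelets rather than the Wavelet Toolbox, uses the Mexican hat wavelet of Figure 5 (the Meyer wavelet is not offered for the CWT there), and runs on a synthetic stand-in for the recorded IMR signal.

# Minimal sketch of the wavelet analysis step, assuming PyWavelets instead of
# the MATLAB Wavelet Toolbox used by the author; the "IMR" signal is synthetic.
import numpy as np
import pywt

fs = 2000.0                                  # hypothetical sampling rate, Hz
t = np.arange(0, 0.1, 1.0 / fs)              # 100 ms record
imr = np.exp(-((t - 0.03) / 0.005) ** 2) * np.sin(2 * np.pi * 120 * t)  # toy burst

scales = np.arange(1, 64)                    # analysis scales
coeffs, freqs = pywt.cwt(imr, scales, "mexh", sampling_period=1.0 / fs)

# 2D map of coefficients (scale x time), analogous to Figures 6-8 ...
print(coeffs.shape)                          # (63, 200)

# ... and a 1D trend over time (mean absolute coefficient per sample),
# analogous to the trends that are averaged for the diagram in Figure 10.
trend = np.abs(coeffs).mean(axis=0)
print(float(trend.mean()))                   # a single mean value per record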

Fig. 6 Healthy person

Fig. 7 Unhealthy person

Fig. 4 Gaussian wavelet

Fig. 8 Person after treatment


It can be easily seen that the activity of the bright area is reduced after the treatment. But a visual assessment of these graphics is not enough; a more detailed representation of the coefficients is required. I have created 1D graphics of the wavelet coefficients. These graphics show the trend of increase and decrease of the coefficients in time. One example of such a graphic is presented in Figure 9:

Fig. 10 Diagram of mean values of coefficients of a healthy, unhealthy person and a person after the treatment

Fig. 9 Trend of wavelet-coefficients of unhealthy person

Averaging these trends of wavelet coefficients gives a diagram that shows the mean values of the coefficients of a healthy person, an unhealthy person and a person after the treatment.

Red is used for the unhealthy person, blue for the healthy person and purple for the person after the treatment. This diagram shows that after the treatment the patient comes closer to the healthy state.
Conclusion
At present, wavelet diagnostics of the medical signal simplifies the examination of a patient's state and helps to assess his health condition. In the future I am planning to assess the concrete severity of the injury with the help of the wavelet coefficients of the IMR.
References
1. Wavelet [website]. – Access mode: http://en.wikipedia.org/wiki/Wavelet, free.
2. N.K. Smolencev. Fundamentals of Wavelet theory.

PROGRAM OF AUTOMATED TUNING OF CONTROLLER CONSTANTS
Mikhaylov V.S., Goryunov A.G., Kovalenko D.S.
Scientific adviser: Goryunov A.G., PhD, docent
Language supervisor: Ermakova Ya.V., teacher
Tomsk Polytechnic University, 634050, Russia, Tomsk, 30 Lenin Avenue
E-mail: [email protected]
The aim of the research is to develop a program in the Matlab/Simulink environment for the automated tuning of controller constants using different methods and to compare the results of the settings. The value of developing this program is as follows:
1. The possibility of automated tuning of controller constants using four different methods;
2. The possibility to compare the results of the settings "on the fly" using the Integrated Absolute Error criterion for quality estimation;


3. The possibility to compare the results of the settings "on the fly" using visual analysis of the transient response curves;
4. Simplicity of execution.
Objectives:
- mathematical description of the model of the investigated object, the continuous stirred tank reactor;
- creation of the model of the object with a built-in controller in the Matlab/Simulink environment;
- creation of the automated tuning program for controller constants using the Ziegler-Nichols, Tyreus-Luyben

and Optimal Module empirical methods and the method of Minimization of the Integrated Absolute Controller Error;
- establishment of subsidiary software modules allowing comparison of the tuning results "on the fly" using visual analysis of the transient response curves and the Integrated Absolute Error criterion for quality estimation;
- testing of the automated tuning program on the model of the continuous stirred tank reactor;
- estimation of the adequacy of the results obtained by the developed program using the «SAR-synthesis» program.
To build an automatic process control system it is required, first of all, to create a model of the process and to adjust the control subsystem. Therefore it is important to know which controller constants are best suited for the investigated process. A system of three continuous stirred tank reactors (SCSTR) is selected as the investigated object, and a PI controller is chosen as the controller. The continuous stirred tank reactor is a common ideal perfectly mixed reactor widely used in chemical engineering. It is usually characterized by the following parameters. The concentration inside the reactor cA1 is the main parameter of the reactor. The residence time τ is the average amount of time a discrete quantity of reagent spends inside the tank. The rate constant k characterizes the reaction rate inside the reactor. The principle structure of the SCSTR is introduced in Figure 1.

Our investigated object with the PI controller connected to it can be described by the following system of ODEs [1]:

dcA1/dt = (1/τ)·(cA0 + cAm − cA1) − k·cA1,
dcA2/dt = (1/τ)·(cA1 − cA2) − k·cA2,
dcA3/dt = (1/τ)·(cA2 − cA3) − k·cA3,
cAm = cAm^set + KC·[(cA3^set − cA3) + (1/TI)·∫(cA3^set − cA3) dt],

where cA1, cA2, cA3 are the concentrations inside the 1st, 2nd and 3rd reactors; τ is the mean residence time; k is the reaction rate constant; cA0 and cAm are the initial and manipulative concentrations; KC is the controller gain constant; TI is the integral controller time.

The critical gain methods, Ziegler-Nichols and Tyreus-Luyben, consist in the following. They are purely empirical methods that start with the critical gain KCcrit of the proportional-only controller. Suitable controller constants are then calculated from this critical gain value and the oscillation period Pcrit at the critical gain. For the Ziegler-Nichols method for a PI controller KC = KCcrit/2.2 and TI = Pcrit/1.2. For the Tyreus-Luyben method for a PI controller KC = KCcrit/3.2 and TI = 2.2·Pcrit. These controller constants have been calculated for our model and are shown in Table 1 [1]. The Optimal module method is also an empirical method [2]. In this method the controller constants are calculated with the help of rather cumbersome formulas, so they are omitted in this paper; the controller constants KC and TI obtained using the Optimal module method are shown in Table 1. Automated tuning of the controller constants using the method of Minimization of the Integrated Absolute Controller Error (IAE) consists in the following. The controller constants are selected so as to obtain the set of constants that corresponds to the minimal IAE of the transient response curves; this is also called two-parameter optimization. Calculation of the IAE is built into the model itself, so the resulting IAE value is one of the output coordinates of the model. Tuning of the constants is done using the built-in function "fminsearch" of the Matlab package. The controller constants obtained by this method are given in Table 1. It is seen that the controller constants calculated by the different methods can differ by up to 4 times. Let us find out which set of controller constants is optimal in terms of control quality.
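The whole workflow can be sketched as follows. The residence time, rate constant, setpoint and critical-gain values below are illustrative placeholders rather than the paper's actual numbers, the bias term cAm^set is taken as zero, and scipy's Nelder-Mead routine plays the role of Matlab's fminsearch.

# Minimal sketch of the tuning workflow: simulate the three-CSTR model with a
# PI controller, score a (KC, TI) pair by its IAE, and tune it with Nelder-Mead.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

tau, k = 1.0, 0.5          # residence time and reaction rate constant (assumed)
cA0, cA3_set = 1.0, 0.4    # inlet concentration and setpoint (assumed)

def rhs(t, y, KC, TI):
    cA1, cA2, cA3, ierr = y                 # ierr = integral of (setpoint - cA3)
    err = cA3_set - cA3
    cAm = KC * (err + ierr / TI)            # PI manipulated concentration (bias = 0)
    dcA1 = (cA0 + cAm - cA1) / tau - k * cA1
    dcA2 = (cA1 - cA2) / tau - k * cA2
    dcA3 = (cA2 - cA3) / tau - k * cA3
    return [dcA1, dcA2, dcA3, err]

def iae(params, t_end=30.0):
    """Integrated absolute error of cA3 for a given (KC, TI)."""
    KC, TI = params
    t = np.linspace(0.0, t_end, 1500)
    sol = solve_ivp(rhs, (0.0, t_end), [0, 0, 0, 0], t_eval=t, args=(KC, TI))
    return np.trapz(np.abs(cA3_set - sol.y[2]), t)

# Empirical settings from an (assumed) P-only test giving KCcrit and Pcrit:
KCcrit, Pcrit = 25.0, 3.5                                   # placeholders
zn = (KCcrit / 2.2, Pcrit / 1.2)                            # Ziegler-Nichols PI
tl = (KCcrit / 3.2, 2.2 * Pcrit)                            # Tyreus-Luyben PI
print("ZN:", zn, "IAE =", round(iae(zn), 3))
print("TL:", tl, "IAE =", round(iae(tl), 3))

# Two-parameter IAE minimization, starting from the Ziegler-Nichols guess.
best = minimize(iae, x0=np.array(zn), method="Nelder-Mead")
print("Min-IAE:", best.x, "IAE =", round(best.fun, 3))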

Fig. 1. Principle structure of SCSTR

Table 1. Parameters of the controller and quality factors of the transient response curves

Name of method      | KC    | TI    | IAE   | Rd   | Tcon
Optimal module      | 8,00  | 0,167 | 4,97  | 0,37 | 20
Tyreus-Luyben       | 20,00 | 1,25  | 0,738 | 0,13 | 13
Ziegler-Nichols     | 29,10 | 1,66  | 1,273 | 0,15 | 41
Minimization of IAE | 11,30 | 2,58  | 0,596 | 0,26 | 41

Comparative transient response curves obtained by the developed program for different values of the controller constants are shown in Figure 2. It is seen that the calculation of controller constants using the Ziegler-Nichols and Tyreus-Luyben methods gives too large oscillations. Calculation using the Optimal module method gives


the smallest oscillations, but in this case the transient responses have too long a settling time, whereas the method of Minimization of the IAE gives a fairly short settling time and an acceptable value of the oscillations.

Fig. 3. Diagram of the quality indicators area

–·– – Ziegler-Nichols; – – – Tyreus-Luyben; ── – Minimization of IAE; · · · – Optimal module
Fig. 2. Curves of transient responses

The values of the IAE obtained for different values of the controller constants are given in Table 1. Judging by this parameter, the best calculation method is the Minimization of the IAE method, since it gives the smallest IAE. The results of verifying the model adequacy obtained using the «SAR-synthesis» program are shown in Figure 3 as a diagram of the quality indicators area [3]. The numerical values of the average control time (Tcon) and the average dynamic control factor (Rd) are also presented in Table 1. It is evident that the Minimization of the IAE method gives the optimal results, since it combines a fairly small average control time and dynamic control factor with a reasonably wide range of variation.


Thereby it was shown that the method of Minimization of the Integrated Absolute Controller Error can be considered the optimal method among the tested methods in terms of settling time, magnitude of the oscillations, IAE value, average control time and dynamic control factor. These results were also seen visually on the transient response curves and on the diagram of the quality indicators area.
Literature
1. Petera K. Process Control: Resources. – Czech Technical University in Prague, 2009. – P. 78-88.
2. Guretsky H. Analysis and synthesis of control systems with delay. – Moscow: Mashinostroenie, 1974. – P. 92-93.
3. ООО «TomIUS Project». «SAR-synthesis» User guide [electronic resource]. – 2007. – Microsoft Word 2003 document (.doc). – P. 25.

A PRACTICAL APPLICATION AND ASSESSMENT OF MACHINE LEARNING TOOLS
Moiseeva E.V.
Scientific advisers: Kosminina N.M., De Decker A., Korobov A.V.
Tomsk Polytechnic University, 634050, Russia, Tomsk, Lenina av., 30
E-mail: [email protected]
1. Introduction
Machine learning, a branch of artificial intelligence, is a scientific discipline concerned with the design and development of algorithms that allow computers to evolve behaviors based on empirical data, such as data from sensors or databases. The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield absolute guarantees of the performance of algorithms; instead, probabilistic bounds on the performance are quite common. In addition to performance bounds, computational learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results: positive results show that a certain class of functions can be learned in polynomial time, while negative results show that certain classes cannot be learned in polynomial time. There are many similarities between machine learning theory and statistics, although they use different terms. In this article machine learning tools are demonstrated recognizing complex patterns and solving regression tasks on the simple example of the Breakout game (Pic. 1). The difficulty lies in the fact that the set of observed examples (the training data set, 25 games played by a human) is not enough to cover all possible behaviors given all possible inputs. So the automatic controller analyses the given training set and applies mathematical tools to make decisions in the future so as to obtain the highest possible score.

Pic.1. GUI of the game

The main purpose of the article is to show how these theoretical tools can be judged numerically (by the final score) and to note further applications of the tools.
2. Feature Selection
To ease the computations and avoid overfitting, feature selection (mutual information calculation and principal component analysis (PCA)) is performed, and the different learning tools work with lower-dimensional data. Feature selection is an essential tool as it reduces the input file size dramatically. For example, we don't need the paddle to chase the ball in all its positions; instead we take only the position of the ball when it touches the paddle, which is definitely an important input. The change of the inputs during the game may also be plotted; in this particular case it shows that the horizontal speed of the ball doesn't change, so we don't take it as an input.
3. Applied Methods
When the number of features taken into account is chosen and a training data set is created, we can use some tools to solve the problem of achieving the highest possible scores in the most effective way.

3.1 Linear Regression
In linear regression, the unknown model parameters are estimated from the data using linear functions. It is probably the most elementary way to perform regression [1] and it has no hyper-parameters to optimize. As the matrix inversion doesn't cause any notable difficulties in this case (though it usually does if the matrix is sparse), either a pseudo-inverse or a gradient descent algorithm can be used.
3.2 Multi-Layer Perceptron (MLP)
The MLP is a neural network based on the perceptron model that uses differentiable activation functions (unlike the perceptron). Usually a 2-layer perceptron is enough to perform the needed transform [2]. The activation functions are hyperbolic tangents for the hidden layer and a linear function for the output layer. Pic. 2 shows the model of the perceptron used. The number of hidden neurons is the criterion optimized for this model. The maximum score is reached when the number of hidden neurons is set to 17.
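A sketch of the linear-regression step is given below. The pseudo-inverse solution follows the remark above; the feature matrix and targets are synthetic stand-ins, since the real training set of 25 recorded games is not reproduced here.

# Minimal sketch of the linear-regression controller from section 3.1.
# X (state features) and y (paddle command) are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # 500 samples, 4 selected features (assumed)
true_w = np.array([0.8, -0.3, 0.1, 0.0])
y = X @ true_w + 0.05 * rng.normal(size=500)   # synthetic targets

X1 = np.hstack([X, np.ones((500, 1))])         # add a bias column
w = np.linalg.pinv(X1) @ y                     # pseudo-inverse solution
print(w.round(3))                              # close to [0.8, -0.3, 0.1, 0.0, 0.0]

predict = lambda x: np.append(x, 1.0) @ w      # paddle command for a new state
print(round(float(predict(np.array([1.0, 0.0, 0.0, 0.0]))), 2))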



Pic.2. The two-layers perceptron

3.3 Radial-Basis Functions Network (RBFN)
The RBFN is a network composed of two layers with radial activation functions in the hidden layer [3]. It is similar in form to the MLP, but the activation functions are Gaussians of the distance between each input and the centre of each neuron. There are many parameters to optimize in this kind of network, and the usual strategy is to set the centers and width of each neuron "by hand" (though in a smart way) and then to optimize the two layers of weights (Pic. 3).

Pic.4. KNN algorithm

4. Conclusion
As shown in Table 1, the best results were achieved with the kNN method, which means it is the most suitable method for the task, though the other methods also show good results (the mean scores achieved by a human were around 100). The standard deviation is an important parameter as it shows the stability of a method.

Table 1. The results of the final computation

Method       | Mean Score | Standard Deviation
Linear Model | 62.2       | 16.8
MLP          | 100        | 24.9
RBFN         | 91.3       | 17.4
kNN          | 119.5      | 20

Pic.3. The optimization of parameters for RBFN

For optimizing the parameters, a grid is created with the width-scaling factor changing from 3 to 30 in increments of 3 and the number of hidden neurons from 5 to 50 in increments of 5.
3.4 K Nearest Neighbors (KNN)
The principle of the method is quite simple [4]: the k nearest neighbors of a new data point are analyzed to decide the action for it (Pic. 4). The number of neighbors taken into account is a hyper-parameter to be optimized; after long calculations it was set to 3.
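A sketch of such a kNN controller, assuming scikit-learn, is shown below; k = 3 follows the text, while the training arrays are hypothetical stand-ins for the recorded games.

# Minimal sketch of the kNN controller from section 3.4 (state features -> action).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
X_train = rng.uniform(0.0, 1.0, size=(1000, 4))     # state features (assumed)
y_train = X_train[:, 0] - X_train[:, 1]             # toy action rule

knn = KNeighborsRegressor(n_neighbors=3)            # the optimized hyper-parameter
knn.fit(X_train, y_train)

new_state = np.array([[0.7, 0.2, 0.5, 0.5]])
print(round(float(knn.predict(new_state)[0]), 2))   # action averaged over 3 neighbours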


However, it should be stressed that each particular task, whether regression for a power demand graph, a currency exchange rate, or even a classification problem, needs its own study to find the best applicable tool. The results achieved prove that the computational difficulties can be overcome without loss of important information.
Literature
1. Michel Verleysen. Machine Learning: regression and dimensionality reduction. UCL, 2005.
2. M. Hassoun. Fundamentals of artificial neural networks. MIT Press, 1995.
3. J. Ghosh, A. Nag. An overview of Radial Basis Function Networks, in: Radial Basis Function Networks 2, R.J. Howlett & L.C. Jain eds., Physica-Verlag, 2001.
4. W. Hardle, et al. Nonparametric and semiparametric models. Springer, 2004.


CONTROL SYSTEM OF RESOURCES IN TECHNICAL SYSTEMS AT LIQUIDATION OF EMERGENCY SITUATIONS
Naumov I.S., Pushkarev A.M.
The supervisor of studies: Pushkarev A.M., candidate of engineering sciences, professor
Perm State Technical University, 614990, Russia, Perm, Komsomolsky Av. 29.
E-mail: [email protected]
The scale of emergency situations and the damage they cause grow constantly, which demands that measures for their localization and liquidation be developed quickly and soundly. Control systems for operating under emergency conditions are created for this purpose. In such situations it is necessary not only to determine the level of danger precisely and to develop a list of priority countermeasures, but also to determine quickly and precisely the composition of the resources needed to counteract the emergency, as well as the ways and tactics of using them according to the chosen counteraction strategy. As a rule, there is a set of possible variants of counteraction to an emergency that has arisen. When an emergency develops rapidly and decisions must be made operatively, the probability of erroneous decisions increases, which, as is known, strongly affects the end result of the counteraction. Even when there is enough information for decision-making, deterministic decisions are usually made that offer a single variant of counteraction, which is far from optimal. All of the above applies to the situation when a single emergency has occurred. Such a situation, however, is a degenerate case: in reality the occurrence of one emergency usually causes the development of several indirectly related emergencies, that is, a complex emergency arises. In such a situation the problems described above become insurmountable. The problem of automated design and operational planning of countermeasures against complex emergencies is solved by means of modern methods and models. Unfortunately, numerous examples both in this country and abroad show that convincing information alone is often not sufficient for management to react quickly to arising emergencies with prompt response actions. The principal causes of such delays are the lag of the information system, the need to verify the reliability of information about the emergency, and the psychological peculiarities of the people involved. Hence, a clear picture of what has occurred is necessary for the localization and liquidation of any particular emergency. For the sustainable development of any enterprise and of the country as a whole, it is necessary to take measures to reduce the damage caused by emergencies and the quantity of resources used for their prevention and liquidation. The versatile problems that have to be solved in the interests of risk management rest on such high-technology areas as the physical mechanisms of the development of accidents and failures, the formation of dangerous natural phenomena, models and methods for forecasting the force, time and place of their occurrence, ways of preventing their occurrence and of reducing the force or softening the consequences of emergencies, economic research, and methods of optimal planning. The development of systems for preventing dangerous phenomena and of ways to reduce danger and soften the consequences of emergencies is considered one of the priority spheres of activity at all levels: international, state, regional and local. However, dangerous natural and technogenic phenomena as a source of emergencies can be predicted only over intervals that are very short from the point of view of carrying out preventive actions; this makes it necessary to use the frequencies of these events as the initial data. Improvement of the control systems focused on localization and liquidation of emergencies is necessary. This improvement can be provided through the following: substantiation of the productivity of the equipment; substantiation of the means necessary for supporting the staff and their equipment; substantiation of the structure of the systems for localization and liquidation of emergencies. An effective preventive plan is formed on the basis of the optimal distribution of the resources, forces and means necessary for carrying out actions aimed at blocking emergencies to the greatest possible extent. The basic criteria for forming the optimal preventive plan for the prevention and liquidation of the consequences of emergencies are the minimum of damage, the minimum of total expenses for carrying out preventive actions, and the minimum of total time for carrying out operative actions to liquidate the emergency and its consequences. The constraints used are the total amounts of resources, forces and means allocated for carrying out the actions, the availability of the necessary forces and means at their deployment points,


and the structural restrictions linking emergencies and the actions carried out. The priorities in an emergency control system consist in finding the optimal (rational) distribution of the available personnel and equipment over the objects where emergencies have arisen, and also in determining the necessary composition and quantity of personnel and equipment for achieving the set objectives. The application of standard methods to problems of this class can be quite successful. The organizational-methodical instructions on the preparation of control bodies and forces of civil defense and of the unified state system for prevention and liquidation of emergencies directly pose the problem of developing special models that provide a scientific-methodical basis for resource management. However, the research carried out so far does not give well-founded answers to the questions of what resources, in what quantity, where to place them and how to use them, taking into account the complex of real conditions, so that they provide the maximum effect when applied in emergencies. The need to find answers to these questions in the absence of a scientific apparatus for determining the optimal values of the parameters and the operating strategy of the resource support system for liquidation of emergencies constitutes the essence of the existing contradiction. The «Concept of national safety of the Russian Federation» notes that «...a new approach to the organization and conduct of civil defense on the territory of the Russian Federation and a qualitative improvement of the unified state system for prevention and liquidation of emergency situations are necessary...» [1]. One direction for realizing such an approach is the creation of complex emergency response systems able to meet the following uniform international and national requirements: 1) effective utilization of all accessible resources; 2) supervision, analysis and estimation of the risk of possible emergencies; 3) presence of a unified information system guaranteeing situational awareness in real time; 4) exact distribution of duties at all levels of management. It is abundantly clear that the determining factor in counteracting the catastrophic development of emergencies is the presence of the corresponding resources, whose operative use reduces or prevents possible damage. Such resources include, besides experts possessing special knowledge, first of all the units and systems capable of localizing or liquidating the negative consequences of emergencies with high efficiency. Therefore it is necessary to investigate the process of managing resources for localization and liquidation of emergencies at spatially distributed industrial sites, and the subject of the research should be the laws governing the influence of the state of the resource support system and of its operating strategy on the results of managing the processes of localization and liquidation of emergencies at spatially distributed industrial sites.
Literature
1. The decree of the President of the Russian Federation from May 12th, 2009 № 537 «About the strategy of national safety of the Russian Federation till 2020».
2. Pilishkin V.N. General Dynamic Model of the System With Intelligent Properties in Control Tasks // Proc. of the 15th IEEE International Symposium on Intelligent Control (ISIC-2000), Rio, Patras, Greece, 17-19 July, 2000. – P. 223-227.
3. Antonov G.N. Methods of forecasting of technogenic safety of difficult organizational-technical systems // Problems of management of risks in a technosphere, volume 5, 2009, № 1. – P. 15-21.

LABORATORY FACILITIES FOR STUDYING INDUSTRIAL MICROPROCESSOR CONTROLLER SIMATIC S7-200
Nikolaev K.O.
Scientific supervisor: Skorospeshkin M.V., associate professor
Language supervisor: Pichugova I.L., senior teacher
Tomsk Polytechnic University, 30, Lenin Avenue, Tomsk, 634050, Russia
E-mail: [email protected]
Introduction
Nowadays programmable controllers Simatic S7-200 are widely used in the oil and gas industries. Particularly effective is the use of the two controllers Siemens S7-200 and TD-200 together, since this allows the course of any process to be monitored both mechanically and visually.
Programmable logic controllers Simatic S7-200 are ideal for building effective automatic control systems at minimum cost for purchasing equipment and developing the system. The


controllers can operate in real time or can be used to construct units of local automation systems and distributed I/O with data exchange via the PPI or MPI interface, the industrial networks PROFIBUS-DP, Industrial Ethernet and AS-Interface, and modem communication systems. The STEP 7-Micro/WIN programming package provides a user-friendly environment for developing, editing and monitoring the logic needed to control the application. STEP 7-Micro/WIN has three program editors with which a control program can be developed conveniently and efficiently. To assist the user in finding the required information, STEP 7-Micro/WIN offers an extensive online help system and a documentation CD containing an electronic version of the manual, application tips and other useful information.

This laboratory complex allows digital signals to be input and output and the controllers to be programmed in various languages. A "Traffic Light" program was implemented as an example of the work of the complex. It is shown in Figure 2.

Fig. 1. General view of the laboratory complex

The laboratory complex includes the following devices:
1. controller table, which contains the following elements:
a) power supply of the controller – LOGO! Power 6EP1332-1SH42;
b) controller unit SIMATIC S7-200 (CPU 224);
c) communication processor CP243-1 IT;
d) text display;
e) PPI/USB converting interface;
f) terminal block connector;
2. control buttons box for input signals;
3. computer monitor;
4. complex computer;
5. work station desks with slide-out keyboard drawer.
The laboratory complex consists of the industrial controller Simatic S7-200; input devices for discrete signals and output devices for digital signals; the communication processor CP 243-1 IT, which provides communication between the controller and the computer via Ethernet; the TD 200 text display for S7-200 programmable controllers, which can be used in a fixed installation or as a handheld device; a PPI/USB – RS485 communication cable; and a PC with the Step7-Micro/WIN software package installed.

Fig.2. Traffic Light program in working condition

In this example, commands from the «Bit Logic» family are used. Bit instructions are designed to perform operations on Boolean variables (one of two values: 0 or 1), and the result of their execution is a variable of Boolean type. Let us consider the following commands:
- Closing Contact

-Opening Contact

These commands get a value from memory, or from the process image register if the data type is I or Q. In the AND and OR blocks a maximum of seven inputs can be used. A closing contact in a circuit is closed (enabled) when the bit is 1. An opening contact in a circuit is closed (enabled) when the bit is 0. In FBD the commands corresponding to closing contacts are represented by the blocks


AND/OR. These commands can be used to manipulate Boolean signals in the same way as LAD contacts. The commands corresponding to opening contacts are also represented by blocks; they are constructed by placing the negation symbol at the level of the input signal. The number of inputs of the AND and OR blocks may be increased up to a maximum of seven.
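The semantics described above can be emulated in a few lines of ordinary code (this is an illustration, not STEP 7 code): a closing contact passes power when its bit is 1, an opening contact when its bit is 0, and the AND/OR blocks accept up to seven such inputs.

# Python emulation of one scan of a toy rung built from the described bit logic.
def closing(bit):            # LAD closing contact / FBD direct input
    return bit == 1

def opening(bit):            # LAD opening contact / FBD negated input
    return bit == 0

def and_block(*inputs):      # FBD AND, at most seven inputs
    assert len(inputs) <= 7
    return all(inputs)

def or_block(*inputs):       # FBD OR, at most seven inputs
    assert len(inputs) <= 7
    return any(inputs)

# Toy rung: output Q0.0 = (I0.0 AND NOT I0.1) OR I0.2
I0_0, I0_1, I0_2 = 1, 0, 0
Q0_0 = or_block(and_block(closing(I0_0), opening(I0_1)), closing(I0_2))
print(Q0_0)   # True -> the output bit would be set in the process image register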

Output (output);

When the ‘output’ command is executed, the bit is set in the process image register. In FBD, while the ‘output’ command is being executed, the bit is set equal to the signal flow.

Positive Transition; Negative Transition

The ‘Positive transition’ contact passes the signal flow for one cycle at each occurrence of a rising edge. The ‘Negative transition’ contact passes the signal flow for one cycle at each occurrence of a falling edge. In FBD these commands are represented by the blocks P and N. When the button is pressed, the output switches from 0 to 1, as shown in Figure 2; in this case the output changes its color, which shows that the program is working. When the button is pressed again, the output at address Q0.0 switches from 1 to 0, as shown in Figure 2. When the button is pressed, the LEDs on the front panel of the controller light up. They correspond to the inputs (the lower row of indicators) and outputs (the top row of indicators). A lit indicator corresponds to a variable value equal to one.
Conclusion
The methodical software of the laboratory complex is a set of programs for studying the programming of industrial Simatic S7-200 controllers in the LAD and FBD languages, and a teaching aid in the form of guidelines for laboratory work. The developed software and methodological support for studying the programming of industrial Simatic S7-200 controllers are used in the educational process of the Department of Automatic Equipment and Computer Systems for students of educational line 220400 "Engineering System Control".
Reference
1. Mitin G.P., Khazanov O.V. Automation Systems by Using Programmable Logic Controllers: Textbook. – M.: IC MSTU Stankin, 2005. – 136 p.
2. Shemelin V.K., Khazanov O.V. Management Systems and Processes: A Textbook for universities. – Stary Oskol: OOO "TNT", 2007. – 320 p.
3. Zyuzev A.M., Nesterov K.E. STEP7 – MICRO/WIN 32 in examples and tasks: A set of tasks for laboratory work. – Yekaterinburg: Ural State Technical University – UPI, 2007. – 27 p.

COMPARISON OF ACCOUNTING SOFTWARE
Nikulina E.V.
Scientific advisor: Aksenov S.V., associate professor
Language advisor: Yurova M.V., senior teacher of English
Tomsk Polytechnic University, 30, Lenin Avenue, Tomsk, 634050, Russia
E-mail: [email protected]
One of the important tasks of an accounting department is to prepare accounting reports of various complexity. Of course, such work can be done by an accountant, but it is really difficult. Therefore, there are a lot of accounting programs which can help accountants in their work.


The creation of proprietary accounting software requires a great deal of work from programmers and a lot of money. Thus, it is not profitable for each company to create its own accounting software; instead, specialized accounting software is widely distributed and used.

A classical Russian accounting complex consists of the following components: a chart of accounts, a business transactions journal, the order log, the general ledger, reports on analytical accounts, the balance sheet, financial reporting forms, cash and bank. All modern accounting programs are based on the creation of documents for the enterprise. The process of working with such a program is the following: the accountant enters the primary documents, for example a credit cash order, an acceptance certificate and others, into the program, and then they are processed by the program. The result of this process is generated business transactions, and each business transaction is a set of accounting entries. So, the main goal of automating accounting tasks is to provide automatic generation of business transactions and convenient storage and analysis of accounting information. The most famous and popular Russian developers of automated accounting systems are "1C" (the series of programs "1C: Enterprise"), "IT" (the "BOSS" family), "Atlant-Inform" (the "Accord" series), "Galaxy – Sail" (the series of programs "Galactica" and "Parus"), "DRC" ("Turbo Accountant"), "Intelligence – the service" (the "BEST" series), "Infinitesimal" (a series of software products from "minimum" to "maximum"), "Informatics" ("Info-Accountant"), "Infosoft" ("Integrator"), "Omega" (the "Abacus" series), "Tsifey" ("Standards") and "R-Style Software Lab" ("Universal Accounting Cyril and Methodius", the RS-Balance series). Nowadays "1C" is the most famous and best-selling product in Russia. The popularity of this program is supported by powerful advertising, an extensive dealer network, a low price and a competent marketing strategy. The main feature of the system is the scheme "accounting transaction – general ledger – balance sheet". The basic package includes a set of loadable forms of primary documents which, if necessary, can be reconfigured and changed in form and filling algorithm. The flexibility of the platform allows this software to be used in various areas. "Info-Accountant" is a Russian company whose main activity is the development of computer programs to automate accounting records in commercial and non-profit organizations, as well as in public institutions. For more than 18 years "Info-Accountant" has been a leading developer of automation software for accounting, tax, inventory and personnel records, which is easy to learn and easy to use. Unlike other accounting software, "Info-Accountant" is a complex program for automating accounting: all sections of accounting and tax records are included in the basic distribution version. In further work with the program users do not need to buy various additions, for example "Salary and Personnel".

The corporation "Galaxy – Sail" offers a program called "Sail". It is designed for small and medium-sized enterprises in various fields of activity. It allows not only accounting but also the financial and economic activities of an enterprise to be automated. This system differs from traditional accounting software in its convenience, simplicity of use and low price. The system also solves management accounting tasks, for example profit calculation and decision-making support in management. The "BEST" system is a trading system, but it nevertheless provides automation of all the main areas of accounting for an enterprise. "BEST" is a closed system and cannot be changed by the user; the software company carries out the modification of the basic modules, which are adapted to the customer's specifics. This software is a complex automation system for accounting, tax and management for small and medium enterprises working in commerce, manufacturing, services and other fields. There is also a vast range of small accounting programs which handle a single accounting task, for example programs to work with personnel, calculate salaries, make different types of reports, or prepare reports for the Pension Fund or the Tax Administration. Some of these programs are free and users can download them via the Internet. Moreover, there are programs intended for enterprises with the simplified tax system or the usual tax system. All changes in legislation are taken into account in future updates of the programs; usually updates come once a month, and more often if necessary. The two most widespread accounting programs are "1C" and "Info-Accountant". Let us now compare them. Table 1 shows the differences between the functions of these accounting packages and Table 2 shows their identical functions.


Table 1. Differences between the functions of the accounting software

Characteristic | 1C | Info-Accountant
Updating the program inside a version | Impossible | Up to 10 updates
Description of the program in the help file | Very little | Complete enough for all sections
Coding of accounts | Installation of a chart-of-accounts card is needed | Any number of symbols
Number of subaccounts | Five levels | Unlimited
Preliminary calculation of results | To get a correct report the accountant has to do it; it is only possible in monopolistic mode | There is no need
Reports in graphical form | Impossible | Possible in all graphical forms
Updating a document after its formation | Possible, but sometimes it leads to damage of documents by inexperienced users | Impossible
Transferring data between business transactions journals | Possible, but the number of characteristics is limited; there is also no control over the repetition of identical transactions | Possible for all characteristics
Exchanging data with other programs | Information exchange with DBF databases | Information exchange with approximately all databases

Table 2. Similarities in the functions of the accounting software

Characteristic | 1C and Info-Accountant
Characteristics of accounts | Each account (and subaccount) has a definition in the balance section: active, passive, active-passive or off-balance
Storing of accounts | Storing is available down to the last level of subaccounts
Integrity of the journal of accounting transactions | It is complete, with automatic updating of the databases
Coding of accounting codes | The number of symbols is not limited
Centralized updating of documents | Available in each accounting program
Opportunity to work with several enterprises | Available in each accounting program
Opportunity to independently adapt standard business transactions | Available in each accounting program

So, "1C: Enterprise" can be used by companies with different activities, from a small shop to a large corporation. Most budgetary institutions and state-financed organizations choose this accounting program, for example administrative authorities, Pension Funds, Tax Administrations and many others. This product is more widespread in Russia than "Info-Accountant". It is used by organizations working in manufacturing, commerce, services and other fields. "Info-Accountant" can also be used by enterprises with different activities. There are special programs for companies working in different spheres and with different tax systems, for example programs for companies with the simplified or the usual tax system, or for working with personnel and salaries, the warehouse and others. Nowadays there is a lot of accounting software on the market. All packages have standard functions and additional ones in which they differ. The choice of each company depends on its size, budget and activities. Beyond a doubt, a small company cannot buy expensive accounting software, and it does not need all the functions of such programs, so it buys cheap software with standard operations. But there are a lot of huge companies with a vast range of activities, and they need complex software to complete all their tasks.
References
1. Anton Gagen. Accounting software. Overview of the major accounting software. [Electronic version]. Information Agency "Financial Lawyer". 12.06.2008. http://www.financiallawyer.ru/newsbox/document/165-528055.html
2. http://www.buhsoft.ru/?title=about.php
3. http://www.snezhana.ru/buh_report/
4. http://1c.ru/
5. http://parus.ru/
6. http://www.aton-c.ru/105.html


SPHERICAL FUNCTIONS IN METHODS OF LIGHTING PROCESSING
Parubets V.V.
Research advisor: Professor Berestneva O.G.
Tomsk Polytechnic University, 634050, 30 Lenin av., Tomsk, Russia
E-mail: [email protected]
The level of realism of images in modern video games depends on the quality of the lighting. Despite the existing mathematical models and the many optimization methods, the possible complexity of the scene, the number of different light sources and the various types of materials of the objects in the scene make lighting calculations non-trivial and demand massive computing power. In its classic form, the lighting calculation is described by the following model [11]:

L(x, ω0) = Le(x, ω0) + ∫S fr(x, ωi → ω0) L(x′, ωi) G(x, x′) dωi,    (1)

where L(x, ω0) is the intensity of the light flux reflected from x in the direction ω0; Le(x, ω0) is the intensity of the light flux emitted by the surface; fr(x, ωi → ω0) is the bidirectional light distribution function of the surface at x, transforming incoming light ωi into reflected light ω0; L(x′, ωi) is the intensity of the light flux coming from other objects in the direction ωi; G(x, x′) is the geometric relationship between x and x′.
The bidirectional function of the reflected light (BFRL) is defined as the ratio of the amount of energy (light) reflected in the direction ω0 to the amount of energy that falls on the surface from the direction ωi. Let the amount of energy reflected in the direction ω0 be L0, and the amount of energy that came from the direction ωi be Ei; then the BFRL is:

BFRL(ω0, ωi) = L0 / Ei,    (2)

where ω0 and ωi are differential solid angles, each of which can be uniquely represented by two angles in spherical coordinates (azimuth and zenith).
The BLDF is defined as the ratio of the amount of energy (light) scattered in the direction ω0 to the amount of energy that falls on the surface from all directions of the visible hemisphere. Let the amount of energy scattered in the direction ω0 be equal to L. We consider a uniform distribution of the scattered light, so the BLDF is independent of the direction of gaze (ω0). Including the visibility function in the BLDF makes it self-shadowing:

BLDF(ωi) = L·Vi / Ei,    (5)

where L is the amount of energy scattered equally in all directions ω0, and Vi takes the value 0 or 1 depending on whether the stream of light coming from this direction is blocked by the object geometry or not. Thus, using the BLDF we can represent any point of the object without view-dependent lighting effects (hotspots, etc.). The BLDF is nothing more than a scalar field on a sphere (ωi can be uniquely represented as a point with a value on the unit sphere). This raises the question of approximating this function in a basis convenient for our functional domain. A very suitable choice is the basis of associated spherical functions of a real variable, which form a complete orthonormal set of basis functions on the sphere [5, 8, 9]:

y_l^m(θ, φ) = √2·K_l^m·cos(mφ)·P_l^m(cos θ),   m > 0,
y_l^m(θ, φ) = √2·K_l^m·sin(−mφ)·P_l^(−m)(cos θ),   m < 0,    (6)
y_l^0(θ, φ) = K_l^0·P_l^0(cos θ),   m = 0,

where K_l^m = √[(2l + 1)(l − |m|)! / (4π(l + |m|)!)] is the normalization coefficient and P_l^m(x) is the associated Legendre polynomial. Thus, the BLDF is represented in this basis as:

f_x(θ, φ) ≈ Σ_{l=0}^{n} Σ_{m=−l}^{l} c_l^m Y_l^m(θ, φ),    (7)

where the coefficients are defined as:

c_l^m = ∫S f_x(s) Y_l^m(s) ds.    (8)
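Equations (6)-(8) can be sketched directly in code: the real basis functions are built from associated Legendre polynomials, and the coefficients c_l^m are estimated by Monte Carlo integration over the sphere. The example function being projected is a hypothetical stand-in for a per-point BLDF.

# Minimal sketch of (6)-(8): real spherical harmonics and their projection
# coefficients, estimated by Monte Carlo integration over the unit sphere.
import numpy as np
from scipy.special import lpmv, factorial

def K(l, m):
    m = abs(m)
    return np.sqrt((2 * l + 1) * factorial(l - m) / (4 * np.pi * factorial(l + m)))

def y_real(l, m, theta, phi):
    """Real spherical harmonic y_l^m as in (6)."""
    if m > 0:
        return np.sqrt(2) * K(l, m) * np.cos(m * phi) * lpmv(m, l, np.cos(theta))
    if m < 0:
        return np.sqrt(2) * K(l, m) * np.sin(-m * phi) * lpmv(-m, l, np.cos(theta))
    return K(l, 0) * lpmv(0, l, np.cos(theta))

def f(theta, phi):
    return np.maximum(np.cos(theta), 0.0)     # hypothetical hemispherical BLDF

# c_lm = integral over the sphere of f(s) * y_l^m(s) ds (8), by Monte Carlo.
rng = np.random.default_rng(0)
n = 200_000
u, v = rng.uniform(size=n), rng.uniform(size=n)
theta = np.arccos(1.0 - 2.0 * u)              # uniformly distributed directions
phi = 2.0 * np.pi * v
order = 2
for l in range(order + 1):
    for m in range(-l, l + 1):
        c_lm = 4.0 * np.pi * np.mean(f(theta, phi) * y_real(l, m, theta, phi))
        print(l, m, round(float(c_lm), 3))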

This method has worked well in video games [7] and in the creation of visual effects in movies [1]. The brightness of each object point is calculated as:

L = k·L_D + (1 − k)·L_S,    (9)

where k – diffuse reflectance coefficient (k