Thursday, September 13, 2012

A Brief History of Phased Array Testing

During their first few decades, commercial ultrasonic instruments relied entirely on single-element transducers that used one piezoelectric crystal to generate and receive sound waves, dual element transducers that had separate transmitting and receiving crystals, and pitch/catch or through-transmission systems that used a pair of single-element transducers in tandem. These approaches are still used by the majority of current commercial ultrasonic instruments designed for industrial flaw detection and thickness gaging; however, instruments using phased arrays are steadily becoming more important in the ultrasonic NDT field.
The principle of constructive and destructive interaction of waves was demonstrated by English scientist Thomas Young in 1801 in a notable experiment that utilized two point sources of light to create interference patterns. Waves that combine in phase reinforce each other, while waves that combine out of phase cancel each other.
Phase shifting, or phasing, is in turn a way of controlling these interactions by time-shifting wave fronts that originate from two or more sources. It can be used to bend, steer, or focus the energy of a wave front. In the 1960s, researchers began developing ultrasonic phased array systems that utilized multiple point source transducers that were pulsed so as to direct sound beams by means of these controlled interference patterns. In the early 1970s, commercial phased array systems for medical diagnostic use first appeared, using steered beams to create cross-sectional images of the human body.
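To illustrate how such a delay law steers a beam, the following sketch (a simplified, hypothetical Python example, not drawn from any particular instrument) computes the firing delay for each element of a linear array so that the individual wave fronts combine in phase along a chosen steering angle:

import math

# Simplified linear-array steering delay law (illustrative only).
# Each element is delayed so its wave front arrives in phase along
# the steered direction: delay_n = n * pitch * sin(theta) / velocity

def steering_delays(num_elements, pitch_mm, angle_deg, velocity_mm_per_us):
    """Per-element firing delays (in microseconds) for a steered beam."""
    theta = math.radians(angle_deg)
    delays = [n * pitch_mm * math.sin(theta) / velocity_mm_per_us
              for n in range(num_elements)]
    offset = min(delays)  # shift so the earliest element fires at time zero
    return [d - offset for d in delays]

# Example: 16 elements at 0.6 mm pitch, shear velocity in steel ~3.2 mm/us,
# beam steered 30 degrees from normal
for n, d in enumerate(steering_delays(16, 0.6, 30.0, 3.2)):
    print(f"element {n:2d}: fire at {d:.3f} us")

The same delay law applied in reverse on reception time-shifts the returning echoes before they are summed, which is how a phased array instrument forms its receive beam.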
Initially, the use of ultrasonic phased array systems was largely confined to the medical field, aided by the fact that the predictable composition and structure of the human body make instrument design and image interpretation relatively straightforward. Industrial applications, on the other hand, represent a much greater challenge because of the widely varying acoustic properties of metals, composites, ceramics, plastics, and fiberglass, as well as the enormous variety of thicknesses and geometries encountered across the scope of industrial testing. The first industrial phased array system, introduced in the 1980s, was extremely large and required data transfer to a computer in order to do the processing and image presentation. These systems were most typically used for in-service power generation inspections. This technology was pushed heavily in the nuclear market in particular, where the critical nature of the assessments justified the use of cutting-edge technology to improve the probability of detection. Other early applications involved large forged shafts and low pressure turbine components.
Portable, battery-powered phased array instruments for industrial use appeared in the 1990s. Analog designs had required considerable power and space to create the multi-channel configurations necessary for beam steering, but the transition to digital electronics and the rapid development of inexpensive embedded microprocessors enabled more rapid development of the next generation of phased array equipment. In addition, the availability of low power electronic components, better power-saving architectures, and industry-wide use of surface-mount board design led to miniaturization of this advanced technology. The result was phased array instruments that allowed electronic setup, data processing, display, and analysis all within a portable device, opening the doors to more widespread use across the industrial sector. This in turn made it practical to specify standard phased array probes for common applications.

Introduction to Ultrasonic Testing


Ultrasonic test instruments have been used in industrial applications for more than sixty years. Since the 1940s, the laws of physics that govern the propagation of high frequency sound waves through solid materials have been used to detect hidden cracks, voids, porosity, and other internal discontinuities in metals, composites, plastics, and ceramics, as well as to measure thickness and analyze material properties. Ultrasonic testing is completely nondestructive and safe, and it is a well-established test method in many basic manufacturing, process, and service industries, especially in applications involving welds and structural metals.
The growth of ultrasonic testing largely parallels developments in electronics, and later in computers. Early work in Europe and the United States in the 1930s demonstrated that high frequency sound waves would reflect from hidden flaws or material boundaries in predictable ways, producing distinctive echo patterns that could be displayed on oscilloscope screens. Sonar development during the Second World War provided further impetus for research in ultrasonics. In 1945, US researcher Floyd Firestone patented an instrument he called the Supersonic Reflectoscope, which is generally regarded as the first practical commercial ultrasonic flaw detector to use the pulse/echo technique commonly employed today. It led to the many commercial instruments that were introduced in the years that followed. Among the companies that were leaders in the development of ultrasonic flaw detectors, gages, and transducers in the 1960s and 1970s were Panametrics, Staveley, and Harisonic, all of which are now part of Olympus NDT.
In the late 1940s, researchers in Japan pioneered the use of ultrasonic testing in medical diagnostics using early B-scan equipment that provided a two-dimensional profile image of tissue layers. By the 1960s, early versions of medical scanners were being used to detect and outline tumors, gallstones, and similar conditions. In the 1970s, the introduction of precision thickness gages brought ultrasonic testing to a wide variety of manufacturing operations that required thickness measurement of parts in situations where there was access to only one side, and corrosion gages came into wide use for measurement of remaining wall thickness in metal pipes and tanks.
The latest advances in ultrasonic instruments have been based on the digital signal processing techniques and the inexpensive microprocessors that became available from the 1980s onward. This has led to the latest generation of miniaturized, highly reliable portable instruments and on-line inspection systems for flaw detection, thickness gaging, and acoustic imaging.

Wednesday, September 12, 2012

API 570 Examination Formulas


The minimum thickness, T, for the pipe selected, considering manufacturer’s minus tolerance, shall be not less than tm.
(b) The following nomenclature is used in the equations for pressure design of straight pipe:
c = sum of the mechanical allowances (thread or groove depth) plus corrosion and erosion allowances. For threaded components, the nominal thread depth (dimension h of ASME B1.20.1 or equivalent) shall apply. For machined surfaces or grooves where the tolerance is not specified, the tolerance shall be assumed to be 0.5 mm (0.02 in.) in addition to the specified depth of the cut.
D = outside diameter of pipe as listed in tables of standards or specifications or as measured
d = inside diameter of pipe. For pressure design calculation, the inside diameter of the pipe is the maximum value allowable under the purchase specification.
E = quality factor from Table A-1A or A-1B
P = internal design gage pressure
S = stress value for material from Table A-1
T = pipe wall thickness (measured or minimum in accordance with the purchase specification)
t = pressure design thickness, as calculated in accordance with para. 304.1.2 for internal pressure or as determined in accordance with para. 304.1.3 for external pressure
tm = minimum required thickness, including mechanical, corrosion, and erosion allowances
W = weld joint strength reduction factor in accordance with para. 302.3.5(e)
Y = coefficient from Table 304.1.1, valid for t < D/6 and for materials shown. The value of Y may be interpolated for intermediate temperatures. For t ≥ D/6:

Y = (d + 2c) / (D + d + 2c)
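As a worked illustration of how this nomenclature fits together, the sketch below (in Python) applies the straight-pipe internal pressure design relations of ASME B31.3 para. 304.1.2, t = PD / [2(SEW + PY)], together with tm = t + c. The numeric inputs are placeholders for illustration only, not values from any code table; actual values of S, E, W, and Y must be taken from Tables A-1, A-1A/A-1B, and 304.1.1 for the specific material and temperature.

# Sketch of the B31.3 straight-pipe internal pressure design calculation
# referenced by API 570. Input values are illustrative placeholders only.

def pressure_design_thickness(P, D, S, E, W, Y):
    """Pressure design thickness t per para. 304.1.2:
    t = PD / [2(SEW + PY)], valid for t < D/6."""
    return (P * D) / (2 * (S * E * W + P * Y))

def minimum_required_thickness(t, c):
    """Minimum required thickness tm = t + c."""
    return t + c

def y_thick_wall(d, D, c):
    """Y coefficient when t >= D/6: Y = (d + 2c) / (D + d + 2c)."""
    return (d + 2 * c) / (D + d + 2 * c)

# Illustrative example: NPS 6 pipe (D = 6.625 in.) at 300 psig
P = 300.0    # internal design gage pressure, psi
D = 6.625    # outside diameter, in.
S = 20000.0  # allowable stress, psi (placeholder; see Table A-1)
E = 1.0      # quality factor (placeholder; see Table A-1A/A-1B)
W = 1.0      # weld joint strength reduction factor
Y = 0.4      # coefficient from Table 304.1.1 (typical ferritic value)
c = 0.0625   # corrosion, erosion, and mechanical allowances, in.

t = pressure_design_thickness(P, D, S, E, W, Y)   # ~0.049 in.
tm = minimum_required_thickness(t, c)             # ~0.112 in.
print(f"t  = {t:.4f} in.")
print(f"tm = {tm:.4f} in.")

The selected pipe, after the manufacturer's minus tolerance (commonly 12.5% for seamless pipe) is deducted from its nominal wall, must then be no thinner than tm.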