Doctoral Theses in Geophysics - CPGF/IG
Permanent URI for this collection: https://repositorio.ufpa.br/handle/2011/2357
The Academic Doctorate belongs to the Graduate Program in Geophysics (CPGF) of the Instituto de Geociências (IG) of the Universidade Federal do Pará (UFPA).
Browsing Doctoral Theses in Geophysics - CPGF/IG by CNPq subject "CNPQ::CIENCIAS EXATAS E DA TERRA::GEOCIENCIAS::GEOFISICA::GRAVIMETRIA"
Now showing 1 - 4 of 4
Item (Open Access): Interpolação de dados de campo potencial através da camada equivalente (Universidade Federal do Pará, 1992-09-15). MENDONÇA, Carlos Alberto; SILVA, João Batista Corrêa da; http://lattes.cnpq.br/1870725463184491

The equivalent layer technique is a useful tool for incorporating, in the interpolation of potential field data, the constraint that the anomaly is a harmonic function. However, this technique can be applied only to surveys with a small number of data points, because it demands the solution of a least-squares problem involving a linear system whose order equals the number of data. To make the equivalent layer technique feasible for surveys with large data sets, we developed the concept of equivalent data and the EGTG method. Basically, the equivalent data principle consists of selecting a subset of the data such that the least-squares fitting obtained using only this subset also fits all the remaining data within a threshold value. The selected data are called equivalent data and the remaining data, redundant data. This is equivalent to splitting the original linear system into two subsystems, the first related to the equivalent data and the second to the redundant data, in such a way that the least-squares solution obtained from the first subsystem reproduces all the redundant data. This procedure makes it possible to fit all the measured data using only the equivalent data (and not the entire data set), reducing the number of operations and the demand for computer memory. The EGTG method optimizes the evaluation of dot products in solving least-squares problems. First, the dot product is identified as a discrete integration of a known analytic integral. Then, the evaluation of the discrete integral is approximated by the evaluation of the analytic integral.
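The basic equivalent-layer step described above can be sketched with a small synthetic example. Everything here is illustrative (a plane of point sources at an assumed depth, arbitrary units, data generated from the layer itself); it is a minimal sketch of the generic least-squares fit, not the thesis's actual formulation:

```python
import numpy as np

def layer_kernel(xy_obs, xy_src, depth):
    # Vertical attraction of unit point sources on a plane at the given
    # depth (physical constants absorbed into the source strengths).
    dx = xy_obs[:, None, 0] - xy_src[None, :, 0]
    dy = xy_obs[:, None, 1] - xy_src[None, :, 1]
    return depth / (dx**2 + dy**2 + depth**2) ** 1.5

rng = np.random.default_rng(0)
xy_data = rng.uniform(0.0, 10.0, size=(50, 2))      # scattered stations
xs, ys = np.meshgrid(np.linspace(0, 10, 8), np.linspace(0, 10, 8))
xy_src = np.column_stack([xs.ravel(), ys.ravel()])  # 64 equivalent sources

G = layer_kernel(xy_data, xy_src, depth=2.0)
m_true = rng.normal(size=xy_src.shape[0])
d = G @ m_true                                      # synthetic observations

# Fit the layer by least squares; the fitted layer is harmonic above the
# source plane, so it can then be evaluated (interpolated) anywhere:
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
value = layer_kernel(np.array([[5.0, 5.0]]), xy_src, depth=2.0) @ m_est
```

The cost bottleneck the abstract refers to is visible here: `G` has one row per datum, which is why the equivalent-data idea of fitting only a subset matters for large surveys.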
The EGTG method should be applied when evaluating the analytic integral requires less computational effort than the discrete integration. To determine the equivalent data we developed two algorithms, named DOE and DOEg. The first identifies the equivalent data of the whole linear system, while the second identifies the equivalent data in subsystems of the entire linear system. Each iteration of DOEg consists of one application of the DOE algorithm to a given subsystem. The DOE algorithm yields an interpolating surface that fits all data points, allowing a global interpolation. The DOEg algorithm, on the other hand, optimizes local interpolation because it employs only the equivalent data, whereas other current algorithms for local interpolation employ all the data. The interpolation method using the equivalent layer technique was tested against the minimum curvature method using synthetic data produced by a prismatic source model. The interpolated values were compared with the true values computed from the source model. In all tests, the equivalent layer method performed better than the minimum curvature method. In particular, in the case of a badly sampled anomaly, the minimum curvature method does not recover the anomaly at points where it presents high curvature. For data acquired at different levels, the minimum curvature method presented the worst performance, while the equivalent layer produced very good results. By applying the DOE algorithm, it was possible to fit, using an equivalent layer model, 3137 free-air gravity data from the marine Equant-2 Project and 4941 total field anomaly data from the aeromagnetic Carauari-Norte Project. The DOEg algorithm was also applied to the same data sets, optimizing the local interpolation. It is important to stress that none of these applications would have been possible without the concept of equivalent data.
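The equivalent-data principle (fit a subset, check that the redundant data are reproduced within a threshold) can be illustrated with a simplified greedy stand-in for DOE. The design matrix, data, tolerance, and the greedy selection rule are all assumptions for illustration; the actual DOE/DOEg procedures in the thesis differ in detail:

```python
import numpy as np

def select_equivalent_data(G, d, tol):
    # Greedy sketch: repeatedly fit the currently selected subset by least
    # squares and add the worst-fit remaining datum, until every datum is
    # reproduced within `tol`. The selected rows are the "equivalent data".
    n = len(d)
    selected = [int(np.argmax(np.abs(d)))]     # start from the largest datum
    while True:
        m, *_ = np.linalg.lstsq(G[selected], d[selected], rcond=None)
        resid = np.abs(G @ m - d)
        worst = int(np.argmax(resid))
        if resid[worst] <= tol or len(selected) == n:
            return np.array(selected), m
        selected.append(worst)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
G = np.vander(x, 6)                 # stand-in design matrix (6 parameters)
d = G @ rng.normal(size=6)          # data exactly explained by 6 parameters

idx, m = select_equivalent_data(G, d, tol=1e-8)
```

Because the data are consistent with a 6-parameter model, only a handful of the 200 observations end up selected, yet the resulting fit reproduces all of them.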
The ratio between the CPU times (executing the programs with the same memory allocation) required by the minimum curvature method and the equivalent layer method in global interpolation was 1:31. This ratio was 1:1 in local interpolation.

Item (Open Access): Inversão de momentos de fonte em métodos potenciais (Universidade Federal do Pará, 1993-08-16). MEDEIROS, Walter Eugênio de; SILVA, João Batista Corrêa da; http://lattes.cnpq.br/1870725463184491

The inversion of three-dimensional gravity source moments is analyzed in two situations. In the first, only the anomalous field is assumed to be known. In the second, a priori information about the anomalous body is assumed to be known in addition to the field data. Without using a priori information, we show that it is possible to determine uniquely any moment, or linear combination of moments, whose polynomial kernel (a) is not a function of the Cartesian coordinate orthogonal to the measuring plane and (b) has null Laplacian. We also show that it is impossible to determine any moment whose polynomial kernel has non-null Laplacian. On the other hand, we show that a priori information is implicitly introduced if the source moment inversion method is based on approximating the anomalous field by the truncated series obtained from its multipole expansion. Given any center of expansion, the series truncation imposes a regularization condition on the equipotential surfaces of the anomalous body that allows the moments and linear combinations of moments (which are the coefficients of the multipole expansion basis functions) to be uniquely estimated. Thus, a mass distribution equivalent to the real mass distribution is postulated, the equivalence criterion being specified by the fitting conditions between the observed anomaly and the anomaly calculated with the truncated multipole expansion series.
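What the low-order moments encode can be made concrete by computing them forward from a discretized density model: the 0th-order moment is the total anomalous mass, the 1st-order moments give the center of mass, and the eigenvectors of the 2nd-order central moment matrix give the principal axes. The grid and body below are illustrative, not from the thesis:

```python
import numpy as np

nx = ny = nz = 20
coords = np.arange(nx, dtype=float)
X, Y, Z = np.meshgrid(coords, coords, coords, indexing="ij")

# A block elongated along x plays the role of the anomalous body.
rho = np.zeros((nx, ny, nz))
rho[4:16, 8:12, 8:12] = 1.0

mass = rho.sum()                                          # 0th-order moment
cx, cy, cz = [(rho * c).sum() / mass for c in (X, Y, Z)]  # 1st-order moments

# 2nd-order central moment matrix; its eigenvectors are the principal axes.
dX, dY, dZ = X - cx, Y - cy, Z - cz
M = np.array([[(rho * a * b).sum() for b in (dX, dY, dZ)]
              for a in (dX, dY, dZ)]) / mass
evals, evecs = np.linalg.eigh(M)
axis = evecs[:, np.argmax(evals)]   # direction of largest second moment
```

For this body the largest second moment corresponds to the x direction, i.e. the elongation axis, which is the kind of information (principal axes, anomalous mass, center of mass) the MIT2 inversion estimates from the field data.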
The highest order of the terms retained in the truncated series is specified by the previously defined maximum order of the moments. The moments of the equivalent mass distribution were identified as the stationary solution of a system of first-order linear differential equations, for which uniqueness and asymptotic stability are assured. For the series with moments up to 2nd order, it is implicitly assumed that the anomalous body (1) has finite volume, (2) is sufficiently far from the measuring plane, and (3) has a spatial mass distribution that is convex and presents three orthogonal planes of symmetry. The source moment inversion method based on the approximation of the anomalous field by a truncated series (MIT) is adapted to the magnetic case. In this case, we show that to guarantee uniqueness and asymptotic stability it is sufficient to assume, besides the regularization condition, that the total magnetization has a constant but unknown direction. The MIT method based on the 2nd-order series (MIT2) is applied to three-dimensional synthetic gravity and magnetic anomalies. If the source satisfies all imposed conditions, we show that it is possible to obtain, in a stable way, good estimates of the total anomalous mass or dipole moment vector, of the position of the center of mass or dipole moment, and of the directions of all three principal axes. A partial failure of the MIT2 method may occur either if the source is close to the measuring plane, or if the anomaly presents a localized but strong effect due to a shallow and small body and an attempt is made to estimate the moments of a large and deep body. By partial failure we mean the situation in which some of the estimates may be poor approximations of the true values.
In these two cases we show that the estimates of the depth and of the directions of the principal axes of the (main) source may be poor, but the estimates of the total anomalous mass or dipole moment vector, and of the projection on the measuring plane of the center of mass or dipole moment, are good. If the total magnetization direction is not constant, the MIT2 method may produce poor estimates of the directions of the principal axes (even if the source is far from the measuring plane), but good estimates are obtained for the other parameters. A complete failure of the MIT2 method may occur if the source does not have finite volume. By complete failure we mean the situation in which any obtained estimate may be a poor approximation of the true value. The MIT2 method is applied to real gravity and magnetic data. In the gravimetric case we used an anomaly located in Bahia state, Brazil, which is assumed to be produced by the presence of a large granitic body. Based on the inversion results, we propose that the granite was deformed into an oblate ellipsoid during the compressive event that generated the Middle Proterozoic Espinhaço orogeny. The center of mass estimated for this body is at about 20 km depth. In the magnetic case, we used an anomaly produced by a seamount located in the Gulf of Guinea. Based on the inversion results, we estimate a magnetic palaeopole for the seamount at 50°48'S and 74°54'E, and we suggest that no important magnetization contrast exists below the bottom of the seamount.

Item (Open Access): Mapeamento do relevo do embasamento de bacias sedimentares através da inversão gravimétrica vinculada (Universidade Federal do Pará, 1998-03-02). BARBOSA, Valéria Cristina Ferreira; MEDEIROS, Walter Eugênio de; http://lattes.cnpq.br/2170299963939072; SILVA, João Batista Corrêa da; http://lattes.cnpq.br/1870725463184491

We present three new stable gravity inversion methods to estimate the relief of an interface separating two media.
Solution stability is attained by introducing a priori information about the interface through the minimization of one or more stabilizing functionals. These methods are, therefore, characterized by the physical and geological information incorporated into the problem. The first method, named global smoothness, estimates the depths to the interface at discrete points, assuming that the density contrast between the media is known. To stabilize the inverse problem, we introduce two different constraints: (a) proximity between the true and estimated interface depths at a few isolated points, and (b) proximity between the estimated depths at adjacent points. The combination of these two constraints imposes a uniform degree of smoothness over the entire estimated interface while simultaneously minimizing the misfit between the known and estimated depths at a few boreholes, for example. The second method, named weighted smoothness, also estimates the interface depths at discrete points, assuming that the density contrast is known a priori. This method incorporates the information that the interface is smooth almost everywhere, except at a few fault discontinuities. To incorporate this attribute into the estimated relief, we developed an iterative process in which three kinds of constraints are imposed on the parameters: (a) weighted smoothness between adjacent parameter values, (b) lower and upper bounds on the estimated depths, and (c) proximity between the parameter values and a known numerical value. Starting with an initial solution produced by the global smoothness method, this method enhances the initially estimated geometric features of the interface; that is, flat areas tend to become flatter and steep areas tend to become steeper. This is accomplished by weighting the constraints that require proximity between adjacent parameters.
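The global smoothness construction (a few depth ties plus smoothness between adjacent depths, added to the data misfit) amounts to a damped linear system. The sketch below uses a generic smooth matrix in place of the linearized gravity kernel, synthetic "borehole" depths, and arbitrary regularization weights; all of these are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
true_depth = 1.0 + np.sin(np.linspace(0, np.pi, n))     # "true" interface

# A smooth stand-in for the (linearized) gravity forward operator.
G = np.exp(-0.1 * np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
d = G @ true_depth + rng.normal(scale=0.01, size=n)     # noisy data

D = np.diff(np.eye(n), axis=0)     # first differences: smoothness constraint
wells = [5, 20, 35]                # "boreholes" where depth is known
P = np.eye(n)[wells]
h = true_depth[wells]

# Minimize |Gm - d|^2 + lam_s |Dm|^2 + lam_w |Pm - h|^2 (normal equations).
lam_s, lam_w = 1.0, 100.0
A = G.T @ G + lam_s * D.T @ D + lam_w * P.T @ P
b = G.T @ d + lam_w * P.T @ h
est = np.linalg.solve(A, b)
```

The smoothness term stabilizes the whole profile, while the heavily weighted tie term pins the estimate to the known depths at the boreholes, mirroring constraints (a) and (b) of the abstract.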
The smoothness weights are updated at each iteration so as to enhance the discontinuities detected, in a subtle way, by the global smoothness method. Constraints (b) and (c) are used both to compensate for the decrease in solution stability due to the introduction of small weights, and to reinforce flatness at the basin bottom. Constraint (b) imposes that any depth be nonnegative and smaller than an a priori known maximum depth value, whereas constraint (c) imposes that all depths be closest to a value deliberately violating the maximum depth. The trade-off between these conflicting constraints results in a final relief presenting a flat bottom and steep borders. The third method, named minimum moment of inertia, estimates the density contrasts of a subsurface region discretized into elementary prismatic cells. It incorporates the geological information that the interface to be mapped encompasses an anomalous source which, besides presenting horizontal extents much larger than its largest vertical extent, exhibits borders dipping either vertically or toward the center of mass, and that most of the anomalous mass (or mass deficiency) is concentrated, in a compact way, about a reference level. Conceptually, this information is introduced through the minimization of the moment of inertia of the anomalous sources with respect to a reference level coinciding with the mean topographic surface. This minimization is performed in a subspace of parameters consisting of compact sources presenting borders that dip either vertically or toward the center of mass.
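The weight-update idea of the weighted smoothness method — lowering the smoothness weight wherever the previous iterate shows a large jump, so genuine discontinuities are progressively un-smoothed — can be sketched with an IRLS-style scheme. The identity forward model, single fault step, weight formula, and iteration count are all simplifying assumptions, not the thesis's actual algorithm (which also enforces the depth bounds of constraints (b) and (c)):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
true = np.where(np.arange(n) < 30, 1.0, 3.0)    # one "fault" discontinuity
d = true + rng.normal(scale=0.05, size=n)       # identity forward model

D = np.diff(np.eye(n), axis=0)
m = d.copy()
for _ in range(10):
    jumps = np.abs(D @ m)
    W = np.diag(1.0 / (jumps + 1e-3))   # small weight across large jumps
    # Minimize |m - d|^2 + 5 (Dm)^T W (Dm) with the frozen weights:
    m = np.linalg.solve(np.eye(n) + 5.0 * D.T @ W @ D, d)
```

Flat stretches get large weights and become flatter, while the fault keeps a small weight and stays sharp, which is exactly the "flat areas flatter, steep areas steeper" behavior described above.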
In the minimum moment of inertia method, this information is introduced in practice by means of an iterative process that starts with a tentative solution close to the null solution and adds, at each iteration, a contribution with minimum moment of inertia with respect to the reference level, in such a way that the estimate at the next iteration does not violate the bounds on the density contrast and, at the same time, minimizes the misfit between the observed and fitted data. Additionally, the iterative process "freezes" a density estimate if it becomes very close to either bound. The final solution exhibits a compact mass distribution concentrated about the reference level, whose density contrast distribution is close to the upper (in absolute value) bound established a priori. All three methods were applied to synthetic and field gravity data produced, respectively, by simulated and real sedimentary basins. The global smoothness method produced a good reconstruction of the basin structural framework even when the true basement was not globally smooth, as in the case of the Recôncavo Basin, Brazil. This method presents, however, the lowest resolution of the three. The weighted smoothness method improved the resolution of basements presenting discontinuities produced by gravity faults with large vertical offsets. It is, therefore, potentially useful in interpreting the structural framework of extensional basins, as illustrated both with synthetic data and with data from the Steptoe Valley, Nevada, USA, and from the Recôncavo Basin, Brazil. The minimum moment of inertia method was also applied to synthetic data and to data from the Recôncavo Basin and the San Jacinto Graben, California, USA. The results showed that, compared with the other two methods, this method produces excellent estimates of a basement relief consisting of several adjacent discontinuities with small vertical offsets.
This is a remarkable advantage over the weighted smoothness method, which requires that the interface present few, local discontinuities with large vertical offsets.

Item (Open Access): Uma nova abordagem para interpretação de anomalias gravimétricas regionais e residuais aplicada ao estudo da organização crustal: exemplo da Região Norte do Piauí e Noroeste do Ceará (Universidade Federal do Pará, 1989-12-18). BELTRÃO, Jacira Felipe; HASUI, Yociteru; http://lattes.cnpq.br/3392176511494801; SILVA, João Batista Corrêa da; http://lattes.cnpq.br/1870725463184491

Despite its great importance to the study of global geologic structures, interpreting gravity anomalies is not a trivial task, because the observed gravity field is the resultant of every gravity effect produced by every elementary density contrast. Therefore, in order to isolate the effects produced by shallow sources from those produced by deep sources, I present a new method for regional-residual separation and methods for interpreting each isolated component. The regional-residual separation is performed by approximating the regional field by a polynomial fitted to the observed field by a robust method. This method is iterative, and its starting value is the least-squares fit. In this way, the influence, on the regional fit, of observations containing substantial contributions from the residual field is minimized. The computed regional field is transformed into a map of vertical distances relative to a given datum. This transformation consists of two stages. The first is the downward continuation of the regional field, which is assumed to be produced by a smooth interface separating two homogeneous media: the crust and the mantle. The density contrast between the media is presumed known. The second stage consists of transforming the downward-continued field into a map of vertical distances relative to a given datum by means of simple operations. This method presents two difficulties.
The first is related to the instability inherent in the downward continuation operation. The use of a stabilizer is therefore mandatory, leading to an inevitable loss of resolution of the features being mapped. The second difficulty, inherent in the gravity method, is the impossibility of determining the absolute depths of the interface. However, knowledge of the absolute depth at a single point of the interface, obtained by independent means, allows the computation of all absolute depths. The computed residual component is transformed into an apparent density map. This transformation consists of calculating the intensities of several prismatic sources by linear inversion, assuming that the real sources are confined to a horizontal slab and have density contrasts varying only along the horizontal directions. The performance of the regional-residual separation method was assessed in tests with synthetic data, always producing better results than either polynomial fitting by least squares or the spectral analysis method. The method for interpreting the regional component was applied to synthetic data, producing interfaces very close to the true ones. The limit of resolution of the features being mapped depends not only on the degree of the fitting polynomial, but also on the limitation imposed by the gravity method itself. Interpreting the residual component requires a priori information about the depth and thickness of the slab confining the true sources. However, tests with synthetic data showed that reasonable estimates of the horizontal limits of the sources can be obtained even when the depth and thickness of the slab are not known. The ambiguity involving depth to the top, thickness, and apparent density can be visualized by means of curves of apparent density as a function of the presumed depth to the top of the slab, each curve corresponding to a particular assumed value of the slab thickness.
An analysis of the configuration of the curves allows a semi-quantitative interpretation of the depths of the real sources. The sequence of all three methods described above was applied to gravity data from northern Piauí and northwestern Ceará states, Brazil. As a result, a crustal organization model was obtained, consisting of crustal thickenings and thinnings related to a compressive event which caused the rise of dense lower-crust rocks to shallower depths. This model is consistent with surface geological information. The gravity interpretation also suggests the continuity of the Northwestern Ceará Shear Belt for more than 200 km under the sedimentary cover of the Parnaíba Basin. Although the sequence of methods presented here was developed for the study of large-scale crustal structures, it could also be applied to the interpretation of smaller structures such as the basement relief of a sedimentary basin in which the sediments have been intruded by mafic rocks.
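The need for a stabilizer in downward continuation, mentioned for the regional interpretation above, can be seen in a one-dimensional wavenumber-domain sketch: continuation toward the sources multiplies each Fourier component by exp(+|k|h), which blows up high-wavenumber noise unless the gain is damped. The profile, depth, noise level, and Wiener-style damping factor are illustrative assumptions, not the stabilizer used in the thesis:

```python
import numpy as np

n, dx, h = 256, 1.0, 2.0
x = np.arange(n) * dx
deep = np.exp(-((x - 128.0) / 20.0) ** 2)   # "true" field at the deeper level

# Upward-continue by h to simulate the observed field, then add noise.
k = np.abs(np.fft.fftfreq(n, d=dx)) * 2.0 * np.pi
observed = np.fft.ifft(np.fft.fft(deep) * np.exp(-k * h)).real
observed += np.random.default_rng(5).normal(scale=1e-3, size=n)

gain = np.exp(k * h)                    # exact downward-continuation operator
naive = np.fft.ifft(np.fft.fft(observed) * gain).real
damped = gain / (1.0 + 1e-4 * gain**2)  # stabilizer: caps the high-k gain
stable = np.fft.ifft(np.fft.fft(observed) * damped).real
```

The damped operator trades a small loss of resolution (exactly the loss the abstract mentions) for suppression of the exponentially amplified noise that dominates the naive result.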