Instrumentation and software for electrical imaging are evolving rapidly, as are their applications in hydrogeology. Advances in multi-channel systems now allow the collection of large 3-D datasets from arrays of hundreds of electrodes, and advances in computing power and inversion software allow such datasets to be analyzed on desktop computers, opening these methods to a wider range of hydrogeologic systems.
Despite these advances, experimental design and effective data analysis require considerable care and scientific insight to ensure meaningful results and interpretations of hydrogeologic processes and parameters. Practitioners face numerous choices of hardware settings (e.g., stacking error cutoffs, applied current, pulse duration) and inversion settings (e.g., regularization, measurement weights), many of which can strongly affect the reconstructed images. Where quantitative information is to be extracted from tomograms (e.g., when rock physics models are applied to tomograms, or when hydrologic or geochemical processes are estimated through time), careful selection of inversion parameters is critical. Overfitting the data can produce spurious structures and unrealistic estimates of geophysical properties and, thus, of the hydrogeologic parameters of interest. On the other hand, underfitting the data may produce tomograms that underpredict the degree of spatial variability and, thus, the variability of hydrogeologic parameters. Although multiple strategies to prevent overfitting and underfitting (e.g., the L-curve, Occam inversion, and generalized cross-validation, GCV) are discussed in the literature, these techniques are applied less often in practice and are not supported by all commercially available electrical imaging inversion software. Because inversion settings can strongly affect the resulting tomograms, it is important to document and justify the choices made. Without such documentation, reproduction of results is problematic.
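The trade-off behind the L-curve criterion can be illustrated with a minimal sketch on a toy Tikhonov problem. Everything here is synthetic and illustrative (a generic smoothing kernel, identity regularization, and a crude corner-finding rule), not a field ER inversion, but the principle carries over: sweep the regularization strength, plot data misfit against model norm, and pick the corner between the overfit and underfit branches.

```python
import numpy as np

# Toy Tikhonov problem standing in for an ER inversion; all names and
# values here are illustrative, not from a field survey.
rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 1.0, n)
G = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)   # smoothing kernel
m_true = np.sin(3 * np.pi * x)                        # "true" model
d = G @ m_true + 0.05 * rng.standard_normal(n)        # noisy data

# Sweep regularization strengths; record the L-curve of data misfit
# ||Gm - d|| versus model norm ||m||.
lams = np.logspace(-4, 2, 25)
misfit, mnorm = [], []
for lam in lams:
    m = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)
    misfit.append(np.linalg.norm(G @ m - d))
    mnorm.append(np.linalg.norm(m))

# Crude corner pick: the interior point where log(model norm) bends
# most sharply with respect to log(misfit).
lr, lm = np.log(misfit), np.log(mnorm)
curv = np.abs(np.gradient(np.gradient(lm, lr), lr))
lam_corner = lams[np.argmax(curv[1:-1]) + 1]
print(f"corner lambda ~ {lam_corner:.3g}")
```

Small regularization values trace the steep overfitting branch (small misfit, large model norm); large values trace the flat underfitting branch; the corner balances the two.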
The goal of this book is to demonstrate and document best practices for electrical imaging data collection and analysis for hydrogeology students and practitioners. In summary, we provide guidelines in seven areas:
- Survey geometry design: Numerical modeling, i.e., synthetic experiments or ‘pre modeling’ (e.g., Terry et al., 2017), should be employed prior to collecting geophysical field data to determine the best survey geometry, based on a best estimate of subsurface heterogeneity or processes of interest and the amount of time available to collect data (a particular issue when monitoring time-lapse processes, where temporal smearing will need to be minimized). Note that the maximum offset between electrodes is limited by the power of the electrical imaging unit and the unit’s ability to inject sufficient current to achieve a good signal-to-noise ratio. Good quadripoles can be selected, in part, by choosing geometries where the geometric factor is small. If boreholes are to be used, they should be spaced such that electrode strings are at least 1.5 times as long vertically as their horizontal separation distance. Survey geometries should capitalize on the sensitivity of each measurement and maximize the coverage of the tomogram.
- Standard procedures for data collection: Data collection and quality assurance and control should be documented using standardized forms and procedures that record how field equipment was set up and deployed, electrode locations, how errors and topography were measured in the field, weather conditions, battery voltages, filenames, contact resistances, and the locations of any infrastructure that could influence the measurements.
- Quantification of measurement error: Stacking errors (at minimum) and, ideally, reciprocal and/or repeated measurements should be collected in the field to assess the quality of the data, inform the editing of datasets, and calculate minimum measurement weights for the inversion.
- Selection of inversion parameters: Existing data and (hydro)geologic insight should be used to inform the selection of inversion parameters or to develop prior information for the inversion process, such as imposing known layers or contacts. For 2-D datasets involving multiple planes, it is useful to apply multiple approaches to data from one plane, compare the results, and design a consistent inversion approach for data from across the site. Investigating alternative inversion settings can also help distinguish artifacts from hydrogeologic features.
- Checks on inversion results: Tomograms should be evaluated for likely inversion artifacts and the effects of bad data. The practitioner should look at the range of estimated electrical conductivity values for plausibility; pixelated (checkerboard) appearance of tomograms; artifacts such as streaking, anomalous blocks, or diagonal patterns; and goodness of fit and convergence of the inversion.
- Resolution assessment: Tomograms should be evaluated against the measurement sensitivity and/or the model resolution matrix to assess the general quality of the reconstruction.
- Comparison to other information: Tomograms, and our interpretations of them, should be considered in light of existing information, including hydrogeologic maps, lithology, borehole logs, hydraulic tests, and even tables of expected properties (given the wealth of published ER and IP data that exist for both unconsolidated and consolidated materials). Borehole logs can provide information to help interpret features seen in tomograms and to help correlate estimated electrical properties with lithology and/or hydraulic properties.
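The geometric-factor screening mentioned under survey geometry design can be sketched for a straight surface line on a homogeneous half-space, where the geometric factor is K = 2π / (1/AM − 1/BM − 1/AN + 1/BN). The function name and spacings below are illustrative:

```python
import numpy as np

def geometric_factor(a, b, m, n):
    """Geometric factor K (in meters) for a surface quadripole on a
    homogeneous half-space: current electrodes at positions a, b and
    potential electrodes at m, n along a straight line."""
    r = lambda p, q: abs(p - q)
    return 2.0 * np.pi / (1/r(a, m) - 1/r(b, m) - 1/r(a, n) + 1/r(b, n))

s = 5.0                                        # electrode spacing, m
k_wenner = geometric_factor(0.0, 3*s, s, 2*s)  # A M N B -> K = 2*pi*s
k_dipdip = geometric_factor(0.0, s, 3*s, 4*s)  # A B M N (dipole-dipole)
# |K| for this dipole-dipole quadripole is 12x the Wenner value, so its
# measured potential difference (and signal-to-noise) is much smaller.
print(abs(k_wenner), abs(k_dipdip))
```

The sign of K depends on the electrode ordering convention; for screening candidate quadripoles it is the magnitude |K| that matters, with smaller |K| favoring better signal-to-noise.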
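The measurement-weight step under quantification of measurement error is often implemented by fitting a linear error model to normal–reciprocal differences. The sketch below uses synthetic resistances in place of field data; the two-parameter model form |e| = a + b·|R| is one common choice, not the only one:

```python
import numpy as np

# Synthetic normal and reciprocal transfer resistances (ohm) standing in
# for field data; reciprocals come from swapping the current and
# potential dipoles of each quadripole.
rng = np.random.default_rng(1)
R_normal = 10.0 ** rng.uniform(-1, 2, 200)               # 0.1-100 ohm
noise = (0.005 + 0.02 * R_normal) * rng.standard_normal(200)
R_recip = R_normal + noise

R_avg = 0.5 * (R_normal + R_recip)
err = np.abs(R_normal - R_recip)

# Fit the error model |e| = a + b*|R| by least squares; the fitted
# model then supplies minimum measurement weights ~ 1/(a + b*|R_i|)
# for the inversion, rather than weighting by raw individual errors.
A = np.column_stack([np.ones_like(R_avg), np.abs(R_avg)])
a_fit, b_fit = np.linalg.lstsq(A, err, rcond=None)[0]
print(f"error model: |e| = {a_fit:.4f} + {b_fit:.4f}|R|")
```

Fitting a model across all quadripoles, rather than using each individual reciprocal error directly, smooths over the chance that some repeated measurements agree closely by coincidence and would otherwise receive unrealistically large weights.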
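The resolution assessment above can be sketched for a linearized problem, where the model resolution matrix is R = (GᵀG + λLᵀL)⁻¹GᵀG. The operator G below is a random stand-in for the Jacobian of an ER inversion at convergence, with sensitivity decaying along the model axis to mimic depth; all sizes and the decay rate are illustrative:

```python
import numpy as np

# Toy linearized problem: G stands in for the sensitivity (Jacobian)
# matrix of an ER inversion, with sensitivity decaying "with depth";
# L is a first-difference smoothing operator.
rng = np.random.default_rng(2)
n_data, n_model = 30, 40
depth_decay = np.exp(-np.arange(n_model) / 15.0)
G = rng.standard_normal((n_data, n_model)) * depth_decay
L = (np.eye(n_model) - np.eye(n_model, k=1))[:-1]

# Model resolution matrix R = (G^T G + lam L^T L)^-1 G^T G. Diagonal
# entries near 1 indicate well-resolved cells, near 0 poorly resolved
# cells; trace(R) estimates the number of effectively resolved
# parameters.
lam = 1.0
Rm = np.linalg.solve(G.T @ G + lam * (L.T @ L), G.T @ G)
diag = np.diag(Rm)
print(f"resolved parameters ~ {np.trace(Rm):.1f} of {n_model}")
```

Because sensitivity decays with depth, the diagonal of R falls off toward the deep cells, flagging the parts of the tomogram where features are controlled more by the regularization than by the data.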