
Earthquakes are among the most multifaceted phenomena humanity has faced since the beginning of its existence, and even now it is virtually impossible to forecast an earthquake exactly. The attention this calamity receives is due to the enormous loss of life and destruction of structures it has caused. Several statistical and probabilistic models have been developed to estimate the size, time and location of upcoming earthquakes. However, engineers are concerned less with the magnitude of an earthquake, which is essentially a measure of the energy released at the focus, than with ground motion parameters (GMPs) such as intensity, peak ground acceleration (PGA) and spectral acceleration, also known as hazard parameters. The response of a building, bridge, dam or any other engineering structure to an earthquake depends directly on these ground motion parameters. Seismologists have developed several ground motion prediction equations, or attenuation relationships, to estimate ground motions for a region or site based on factors such as hypocentral distance, site condition and the seismicity of nearby sources. Seismic hazard assessment is generally carried out to estimate the rate at which a given level of ground motion is exceeded in a particular time period. Traditionally, two basic seismic hazard assessment methodologies are followed: deterministic seismic hazard assessment (DSHA) and probabilistic seismic hazard assessment (PSHA). DSHA is a highly conservative method based mainly on a single worst-case scenario ground motion, and is used for the design of structures whose collapse would cause severe hazard, such as dams and nuclear power plants. PSHA, by contrast, incorporates the uncertainties associated with earthquake location, occurrence and magnitude, and is considered a more economical and less conservative model. PSHA results are widely used for designing all types of structures.
In this investigation, a probabilistic seismic hazard assessment has been carried out for the National Capital Region (NCR) of India. NCR is one of the most densely populated and highly industrialized regions in India. As per IS 1893 (Part 1): 2002, most of the NCR, including the capital city of New Delhi, falls under zone IV of the seismic zoning map of India, with the remaining parts in zone III, which makes the region vulnerable to major earthquakes and therefore an important subject for such studies.
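The exceedance rate at the heart of PSHA can be turned into a probability over a design life via the usual Poisson occurrence assumption. The sketch below is illustrative only; the 10%-in-50-years case and its roughly 475-year return period are standard design values, not results of this study.

```python
import math

def prob_of_exceedance(annual_rate, years):
    """Poisson probability that a ground motion level is exceeded at
    least once in `years`, given its mean annual exceedance rate."""
    return 1.0 - math.exp(-annual_rate * years)

# Classic design case: a return period of about 475 years corresponds
# to roughly a 10% probability of exceedance in 50 years.
rate = 1.0 / 475.0
print(round(prob_of_exceedance(rate, 50), 3))
```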
Study Area
The area of study (Fig. 1) comprises three states, Uttar Pradesh, Haryana and Rajasthan, and the National Capital Territory (NCT) of Delhi (Table 1), covering 23 districts with a total area of around 58,332 sq. km. The area is bounded by Karnal in the north.
Data Sources and Data Processing
Earthquake Sources Identification and Characterization

This step involves the identification and characterization of earthquake sources, i.e. the definition of seismic sources and their potential. For this purpose, either line (i.e. fault) or area sources can be used for modeling. Quetta is situated near many active faults, so geological faults were assumed to be the seismic sources. The fault model adopted for the study is based on the faults presented by several sources, especially the National Geo-data Centre, GSP (Geological Survey of Pakistan) and Khan et al. (2003). A number of methods are available for assigning a maximum magnitude to a given tectonic fault. These methods are based on empirically derived correlations between magnitude and key fault parameters such as fault displacement, fault rupture length and rupture area, which are described by geological and seismological studies; field studies of tectonic features in an area provide the data on fault rupture length and fault displacement. The most useful regression relations involving magnitude and fault displacement, fault rupture length or rupture area are those given by Bonilla et al. (1984), Slemmons et al. (1989) and Wells and Coppersmith (1994). For the fault characteristic model, the maximum magnitude of each fault was calculated by assuming rupture of 50% of the fault length. The Chaman Fault yielded the highest maximum magnitude potential, Mw = 8.3. The results of the maximum magnitude assignment are given in Table 3.1.
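As an illustration of how such regression relations are applied, the sketch below uses the Wells and Coppersmith (1994) all-slip-type relation between moment magnitude and surface rupture length, Mw = 5.08 + 1.16 log10(SRL), together with the 50% rupture-length assumption described above. The fault length shown is hypothetical and does not reproduce the Chaman Fault calculation.

```python
import math

def mw_from_rupture_length(surface_rupture_length_km):
    """Estimate moment magnitude from surface rupture length (km) using
    the Wells and Coppersmith (1994) all-slip-type regression:
        Mw = 5.08 + 1.16 * log10(SRL)
    """
    return 5.08 + 1.16 * math.log10(surface_rupture_length_km)

# Hypothetical mapped fault length (km), for illustration only.
fault_length_km = 200.0
# 50% rupture-length assumption, as used in the study.
rupture_length_km = 0.5 * fault_length_km
print(round(mw_from_rupture_length(rupture_length_km), 2))
```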

Source-to-Site Distance
This step involves determining the shortest distance between each seismic source and the site under study. The maximum magnitude potential and the shortest distances from the causative sources to the site are the inputs to the predictive empirical relationships applied in the next step. Based on field studies of the faults and interpretation of local seismicity, a shortest distance was assigned to each causative source for the evaluation of PGA. The source-to-site distances, taken as the minimum distance between any part of each source and Quetta, are given in Table 3.2.
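One simple way to approximate the source-to-site distance is to take the minimum great-circle distance from the site to the vertices of each digitized fault trace. This is only a sketch with hypothetical coordinates; the actual study may have measured distances to the full fault geometry rather than to vertices alone.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def shortest_source_to_site(fault_trace, site):
    """Shortest distance (km) from a site to any vertex of a fault trace.
    fault_trace: list of (lat, lon) tuples; site: (lat, lon)."""
    return min(haversine_km(lat, lon, site[0], site[1]) for lat, lon in fault_trace)

# Hypothetical fault trace vertices and an approximate site location.
trace = [(30.2, 66.5), (30.6, 66.8), (31.0, 67.1)]
site = (30.18, 66.99)
print(round(shortest_source_to_site(trace, site), 1))
```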
Earthquake Catalogue
The very first step in a seismic hazard analysis of any region is to compile a suitable earthquake catalogue for that region. An earthquake catalogue provides information on the magnitude, focus (focal depth) and time of historical and recent earthquakes. Earthquake data for the study area have been extracted from several agencies, including the United States Geological Survey (USGS), the International Seismological Centre (ISC) and the India Meteorological Department (IMD), New Delhi, as well as from publications such as Iyengar et al. (1999). While forming the catalogue it is very important that it be homogeneous, i.e. that all magnitude values taken from the various sources are on the same scale. If the data are reported on different magnitude scales, they are converted to a common scale using conversion relationships; this process is known as homogenization of the earthquake catalogue. In this study, the moment magnitude scale has been used for all computations.
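A minimal sketch of catalogue homogenization follows, assuming the widely used global conversion relations of Scordilis (2006) for body-wave (mb) and surface-wave (Ms) magnitudes; the specific relations used in this study are not stated, so these coefficients are illustrative assumptions.

```python
def to_mw(magnitude, scale):
    """Convert a catalogue magnitude to moment magnitude Mw using the
    global relations of Scordilis (2006). Illustrative only: the
    applicable magnitude ranges are noted but not enforced here."""
    if scale == "Mw":
        return magnitude
    if scale == "mb":                 # valid roughly for 3.5 <= mb <= 6.2
        return 0.85 * magnitude + 1.03
    if scale == "Ms":                 # valid roughly for 3.0 <= Ms <= 6.1
        return 0.67 * magnitude + 2.07
    raise ValueError("unknown magnitude scale: " + scale)

# A toy catalogue of (magnitude, scale) pairs, homogenized to Mw.
catalogue = [(5.6, "mb"), (6.0, "Ms"), (6.8, "Mw")]
homogenized = [round(to_mw(m, s), 2) for m, s in catalogue]
print(homogenized)
```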
Completeness of Catalogue
Due to the lack of proper instruments and networks in earlier times, many earthquakes were not recorded. If the degree of incompleteness is very high, the analysis carried out will not give accurate results. Hence it is important to know the degree of completeness of an earthquake catalogue before carrying out any further analysis. The most widely used method for checking the completeness of an earthquake catalogue is that described by Stepp (1972). For the region considered, the whole dataset was divided into five groups: M
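The Stepp (1972) test treats earthquake occurrence in each magnitude class as a Poisson process: the standard deviation of the mean annual rate estimated over a window of length T is sqrt(lambda/T), and the catalogue is taken as complete over the interval where this statistic follows the 1/sqrt(T) trend. A sketch with hypothetical occurrence years:

```python
import math

def stepp_sigma(event_years, end_year, window_lengths):
    """For one magnitude class, return (T, sigma) pairs per Stepp (1972):
    for each window of length T ending at end_year, the mean annual rate
    is lambda = n/T and its standard deviation is sigma = sqrt(lambda/T).
    Completeness holds over windows where sigma tracks the 1/sqrt(T) line.
    """
    out = []
    for T in window_lengths:
        start = end_year - T
        n = sum(1 for y in event_years if y > start)  # events inside window
        lam = n / T
        out.append((T, math.sqrt(lam / T)))
    return out

# Hypothetical occurrence years for one magnitude class (illustration).
years = [1965, 1972, 1978, 1983, 1990, 1995, 1999, 2003, 2008, 2012]
for T, sigma in stepp_sigma(years, 2015, [10, 20, 30, 40, 50]):
    print(T, round(sigma, 3))
```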