# Review questions: hypothesis testing and regression

QUESTION 39
When s is used to estimate σ, the margin of error is computed by using
normal distribution
t distribution
the mean of the sample
the mean of the population
1 point
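The point of question 39 can be checked numerically: when the population standard deviation σ is unknown and estimated by the sample standard deviation s, the t distribution gives a slightly wider margin of error than the normal. A minimal sketch with made-up sample values; the t critical value 2.064 for 24 degrees of freedom is taken from a standard t table (in practice it would come from a function such as `scipy.stats.t.ppf`):

```python
import math
from statistics import NormalDist

# Hypothetical sample summary: n observations, sample standard deviation s
n = 25
s = 10.0

# z critical value for 95% confidence (the known-sigma case)
z_crit = NormalDist().inv_cdf(0.975)    # about 1.96

# t critical value for 95% confidence, df = n - 1 = 24, from a t table
t_crit = 2.064

me_normal = z_crit * s / math.sqrt(n)   # margin of error using the normal distribution
me_t = t_crit * s / math.sqrt(n)        # margin of error using the t distribution

# Because s only estimates sigma, the t-based margin is wider
print(round(me_normal, 3), round(me_t, 3))
```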
QUESTION 40
For a lower (left) tail test, the p-value is the probability of obtaining a value for the test statistic

at least as small as that provided by the sample

at least as large as that provided by the sample

at least as small as that provided by the population

at least as large as that provided by the population
1 point
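For a lower-tail test, the p-value is the area in the lower tail at or below the observed statistic. A small sketch with an assumed z statistic:

```python
from statistics import NormalDist

z = -1.85                        # hypothetical test statistic from the sample
p_value = NormalDist().cdf(z)    # P(Z <= z): probability of a value at least as small
print(round(p_value, 4))         # 0.0322
```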
QUESTION 41
For a one-tailed hypothesis test (upper tail), the p-value is computed to be 0.034. If the test is being conducted at 95% confidence, the null hypothesis
could be rejected or not rejected depending on the sample size
could be rejected or not rejected depending on the value of the mean of the sample
is not rejected
is rejected
1 point
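At 95% confidence the significance level is α = 1 − 0.95 = 0.05, and the null hypothesis is rejected whenever the p-value falls below α:

```python
p_value = 0.034          # p-value from question 41
alpha = 1 - 0.95         # significance level implied by 95% confidence
reject_null = p_value < alpha
print(reject_null)       # True: 0.034 < 0.05, so H0 is rejected
```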
QUESTION 42
For a two-tailed test, the p-value is the probability of obtaining a value for the test statistic

more extreme than that provided by the sample

less extreme than that provided by the sample

more extreme than that provided by the population

less extreme than that provided by the population
1 point
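For a two-tailed test the p-value doubles the one-tail area beyond |z|, capturing values more extreme in either direction. A sketch with an assumed statistic:

```python
from statistics import NormalDist

z = 2.05                                        # hypothetical test statistic
p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # both tails beyond |z|
print(round(p_value, 4))                        # 0.0404
```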
QUESTION 43
If a hypothesis is not rejected at the 5% level of significance, it
will also not be rejected at the 1% level
will always be rejected at the 1% level
will sometimes be rejected at the 1% level
None of these alternatives is correct.
1 point
QUESTION 44
If a hypothesis is rejected at 95% confidence, it

will always be accepted at 90% confidence

will always be rejected at 90% confidence

will sometimes be rejected at 90% confidence

None of these alternatives is correct.
1 point
QUESTION 45
If a hypothesis is rejected at the 5% level of significance, it
will always be rejected at the 1% level
will always be accepted at the 1% level
will never be tested at the 1% level
may be rejected or not rejected at the 1% level
1 point
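Questions 43–45 all turn on the fact that rejection at one significance level does not pin down the decision at a stricter level. For example, a p-value of 0.03 is rejected at 5% but not at 1%, while 0.005 is rejected at both:

```python
def decision(p_value, alpha):
    # Standard decision rule: reject H0 when the p-value is below alpha
    return "reject H0" if p_value < alpha else "do not reject H0"

# Rejected at the 5% level but not at the stricter 1% level
print(decision(0.03, 0.05), "|", decision(0.03, 0.01))
# Rejected at both levels
print(decision(0.005, 0.05), "|", decision(0.005, 0.01))
```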
QUESTION 46
In hypothesis testing, if the null hypothesis is rejected when the alternative hypothesis is true,
a Type I error has been committed
a Type II error has been committed
either a Type I or Type II error has been committed
the correct decision has been made
1 point
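A Type I error is rejecting a null hypothesis that is true. Simulating many samples drawn with H0 true shows rejections occurring at roughly the significance level α; all parameter values below are made up for illustration:

```python
import math
import random
from statistics import NormalDist, mean

random.seed(0)
mu0, sigma, n, alpha = 50.0, 5.0, 30, 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)    # two-tailed critical value

rejections = 0
trials = 2000
for _ in range(trials):
    sample = [random.gauss(mu0, sigma) for _ in range(n)]   # H0 is true here
    z = (mean(sample) - mu0) / (sigma / math.sqrt(n))
    if abs(z) > z_crit:
        rejections += 1      # each rejection is a Type I error

print(rejections / trials)   # close to alpha = 0.05
```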
QUESTION 47
In hypothesis testing, if the null hypothesis is rejected,

no conclusions can be drawn from the test

the alternative hypothesis is true

the data must have been accumulated incorrectly

the sample size has been too small
1 point
QUESTION 48
In hypothesis testing, the tentative assumption about the population parameter is

the alternative hypothesis

the null hypothesis

either the null or the alternative

None of these alternatives is correct.
1 point
QUESTION 49
The p-value

is the same as the Z statistic

measures the number of standard deviations from the mean

is a distance

is a probability
1 point
QUESTION 50
A least squares regression line

may be used to predict a value of y if the corresponding x value is given

implies a cause-effect relationship between x and y

can only be determined if a good linear relationship exists between x and y

None of these alternatives is correct.
1 point
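The least squares line can be computed in closed form and then used to predict y at a given x, as question 50's first option describes. A sketch with hypothetical data:

```python
# Hypothetical (x, y) observations
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Least squares formulas: b1 = Sxy / Sxx, b0 = ybar - b1 * xbar
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sxx = sum((x - mean_x) ** 2 for x in xs)
b1 = sxy / sxx
b0 = mean_y - b1 * mean_x

# Predict y for a given x value
y_hat = b0 + b1 * 3.5
print(round(b0, 3), round(b1, 3), round(y_hat, 3))
```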
QUESTION 51
E(ui|Xi) = 0 says that
the sample mean of the Xs is much larger than the sample mean of the errors.
the conditional distribution of the error given the explanatory variable has a zero mean.
the sample regression function residuals are unrelated to the explanatory variable.
dividing the error by the explanatory variable results in a zero (on average).
1 point
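The sample analogue of E(ui|Xi) = 0 is that least squares residuals sum to zero and are uncorrelated with the regressor; this holds by construction for any OLS fit, as the check below shows on hypothetical data:

```python
# Hypothetical (x, y) observations
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.1, 5.9, 8.2, 9.7]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
     sum((x - mean_x) ** 2 for x in xs)
b0 = mean_y - b1 * mean_x

residuals = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]

sum_resid = sum(residuals)                               # ~0 by construction
sum_x_resid = sum(x * e for x, e in zip(xs, residuals))  # ~0: residuals unrelated to X
print(round(sum_resid, 10), round(sum_x_resid, 10))
```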
QUESTION 52
In regression analysis, the independent variable is

used to predict other independent variables

used to predict the dependent variable

called the intervening variable

the variable that is being predicted
1 point
QUESTION 53
In regression analysis, the variable that is being predicted is the

dependent variable

independent variable

intervening variable

usually denoted by x
1 point
QUESTION 54
In the simple linear regression model Yi = β0 + β1Xi + ui,
β0 + β1Xi represents the sample regression function.
the intercept is typically small and unimportant.
β0 + β1Xi represents the population regression function.
the absolute value of the slope is typically between 0 and 1.
1 point
QUESTION 55
Larger values of R² imply that the observations are more closely grouped about the

average value of the independent variables

average value of the dependent variable

least squares line (the sample regression line)

origin
1 point
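R² can be computed as the explained fraction of total variation, 1 − SSR/TSS; a value near 1 means the points lie close to the fitted line. A sketch on hypothetical data:

```python
# Hypothetical (x, y) observations
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.2, 1.9, 3.2, 3.8, 5.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
     sum((x - mean_x) ** 2 for x in xs)
b0 = mean_y - b1 * mean_x

ssr = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))  # sum of squared residuals
tss = sum((y - mean_y) ** 2 for y in ys)                     # total sum of squares
r_squared = 1 - ssr / tss
print(round(r_squared, 4))   # near 1: points cluster tightly around the fitted line
```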
QUESTION 56
Regression analysis is a statistical procedure for developing a mathematical equation that describes how

one independent and one or more dependent variables are related

several independent and several dependent variables are related

one dependent and one or more independent variables are related

None of these alternatives is correct.
1 point
QUESTION 57
The assumption of homoskedasticity implies

The independent (explanatory) variable X is uncorrelated with the error term u.

The conditional distribution of u given X has mean zero.

The variance of each u is constant for different values of X.

There is no correlation between the error terms of different observations.
1 point
QUESTION 58
The assumption of no autocorrelation implies

The independent (explanatory) variable X is uncorrelated with the error term u.

The conditional distribution of u given X has mean zero.

The variance of each u is constant for different values of X.

There is no correlation between the error terms of different observations.
1 point
QUESTION 59
To say that the least squares estimator of the slope is unbiased means

The estimated slope coefficient will always be equal to the true parameter value.

The estimated slope coefficient will get closer to the true parameter value as the sample size increases.

The mean of the sampling distribution of the slope parameter is zero.

If repeated samples of the same size are taken, on average their values will be equal to the true parameter.
1 point
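Unbiasedness can be illustrated by repeated sampling: across many samples of the same size, the average of the estimated slopes settles near the true slope, even though any single estimate misses it. A simulation with made-up parameters:

```python
import random
from statistics import mean

random.seed(1)
true_b0, true_b1 = 2.0, 3.0
xs = [float(i) for i in range(1, 21)]   # fixed regressor values

def ols_slope(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Draw many samples of the same size and estimate the slope in each
slopes = []
for _ in range(1000):
    ys = [true_b0 + true_b1 * x + random.gauss(0, 2) for x in xs]
    slopes.append(ols_slope(xs, ys))

print(round(mean(slopes), 2))   # averages out near the true slope 3.0
```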
QUESTION 60
The regression R² is a measure of
the goodness of fit of your regression line.
whether or not ESS > TSS.
whether or not X causes Y.
the square of the determinant of R.
1 point
QUESTION 61
The regression model includes a random error term for a variety of reasons. Which of the following is NOT one of them?

Measurement errors in the observed variables.

Omitted influences on Y.

Linear functional form is only an approximation.

There may be approximation errors in the calculation of the least squares estimates.
1 point
QUESTION 62
A least squares regression line

may be used to predict a value of y if the corresponding x value is given

implies a cause-effect relationship between x and y

can only be determined if a good linear relationship exists between x and y

None of these alternatives is correct.
1 point
QUESTION 63
A multiple regression model has

only one independent variable

more than one dependent variable

more than one independent variable

at least 2 dependent variables
1 point
QUESTION 64
A multiple regression model has the form Ŷ = 7 + 2X₁ + 9X₂.
As X₁ increases by 1 unit (holding X₂ constant), Ŷ is expected to

increase by 9 units

decrease by 9 units

increase by 2 units

decrease by 2 units
1 point
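In a fitted multiple regression, the coefficient on one regressor is the change in the predicted value per one-unit change in that regressor with the others held fixed, which can be checked directly for the model in question 64:

```python
def y_hat(x1, x2):
    # The fitted model from question 64: Y-hat = 7 + 2*X1 + 9*X2
    return 7 + 2 * x1 + 9 * x2

# Increase X1 by 1 unit while holding X2 constant (values here are arbitrary)
change = y_hat(x1=4, x2=10) - y_hat(x1=3, x2=10)
print(change)   # 2: Y-hat rises by the coefficient on X1
```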
QUESTION 65
In multiple regression analysis,

there can be any number of dependent variables but only one independent variable

there must be only one independent variable

the coefficient of determination must be larger than 1

there can be several independent variables, but only one dependent variable
1 point
QUESTION 66
When there are omitted variables in the regression that are determinants of the dependent variable, then
the OLS estimator is biased if the omitted variable is correlated with the included variable.
this has no effect on the estimator of your included variable because the other variable is not included.
this will always bias the OLS estimator of the included variable.
you cannot measure the effect of the omitted variable, but the estimator of your included variable(s) is (are) unaffected.
1 point
QUESTION 67
If you had a two-regressor regression model, then omitting one variable that is relevant
makes the sum of the product between the included variable and the residuals different from 0.
will always bias the coefficient of the included variable upwards.
will have no effect on the coefficient of the included variable if the correlation between the excluded and the included variable is negative.
can result in a negative value for the coefficient of the included variable, even though the coefficient will have a significant positive effect on Y if the omitted variable were included.
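The mechanism behind questions 66–67 shows up in a short simulation: when a relevant variable correlated with the included regressor is omitted, the estimated coefficient on the included variable is pulled away from its true value. The data-generating process below is invented for illustration, with true model y = 1 + 2·x1 + 5·x2 and x2 positively correlated with x1:

```python
import random
from statistics import mean

random.seed(2)
n = 5000

# True model: y = 1 + 2*x1 + 5*x2 + u, with x2 correlated with x1
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [0.8 * a + random.gauss(0, 0.5) for a in x1]
y = [1 + 2 * a + 5 * b + random.gauss(0, 1) for a, b in zip(x1, x2)]

def slope(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (v - my) for x, v in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Regressing y on x1 alone omits x2, so the slope picks up part of x2's effect:
# its expected value is roughly 2 + 5 * 0.8 = 6, not the true 2.
print(round(slope(x1, y), 1))
```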