Cobb-Douglas Production Function

The Cobb-Douglas production function is recognized in most economic research as a production functional form widely used to represent the technological relationship between the amounts of two or more inputs, most often physical capital and labor, and the amount of output those inputs generate. The form was developed and tested by Charles Cobb and Paul Douglas over the period from 1927 to 1947.

The most common form of the Cobb-Douglas production function is expressed as:

Y = A L^β K^α

Where:

Y = total production (the real value of all goods produced in a year)

A = total factor productivity

L = labor input (the total number of person-hours worked in a year)

K = capital input (the real value of buildings, equipment, and machinery)

α and β = the output elasticities of capital and labor, respectively

The Cobb-Douglas production function can be presented as either a short-run or a long-run production function. From an economist's perspective, the short run refers to a short time horizon over which at least one input, such as capital, is fixed, whereas the long-run production function refers to a long time horizon over which no input needs to be fixed (Coelli, 1997).
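
Taking logarithms of the Cobb-Douglas form gives ln Y = ln A + β ln L + α ln K, which can be estimated by ordinary least squares. The following Python sketch, using simulated data and the statsmodels library, illustrates how the output elasticities could be recovered; the variable names and parameter values are illustrative assumptions, not figures from this study.

# Minimal sketch: estimating a Cobb-Douglas production function by OLS
# on its log-linear form ln(Y) = ln(A) + beta*ln(L) + alpha*ln(K) + error.
# The data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
L = rng.uniform(50, 500, n)          # labor input (hypothetical person-hours)
K = rng.uniform(100, 1000, n)        # capital input (hypothetical real value)
A, alpha, beta = 2.0, 0.35, 0.65     # assumed "true" parameters
Y = A * L**beta * K**alpha * np.exp(rng.normal(0, 0.05, n))

X = sm.add_constant(pd.DataFrame({"ln_L": np.log(L), "ln_K": np.log(K)}))
ols = sm.OLS(np.log(Y), X).fit()
print(ols.params)   # const estimates ln(A); ln_L estimates beta; ln_K estimates alpha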

Balistreri (2003), however, discussed certain limitations of the Cobb-Douglas model. The measurement of capital and labor is the driver of structural change, yet it has shortcomings and remains contested in the economics field. From a structural perspective, the process of capital accumulation poses a complicated dynamic problem, so estimating capital input on an essentially static foundation introduces misspecification error.

 

 

3.7 Translog Production Function

Translog production functions appeared in studies concerned with discovering and defining new, more flexible forms of the production function, building on the CES production function. Priority in applying the translog form arguably belongs to J. Kmenta's 1967 proposal, which approximated the CES production function by a second-order Taylor series around the point where the elasticity of substitution equals unity, the case corresponding to the Cobb-Douglas production function. This form of the production function can be denoted as:

where:

ln= natural logarithm

Y= Output (Gross domestic product)

K= Fixed capital

L= Employed population.

A3, α3, β3 and γ3 are parameters to be estimated.

Two new forms of the production function were proposed by Griliches and Ringstad in 1971. The first form was established under the assumption that α + β = 1, so the production function naturally reduces to a labor productivity function:

The above function is a second-order polynomial in logarithms with a single input, the capital-labor ratio. The second production function was defined by relaxing the constraints imposed on the parameters of the function proposed by Kmenta, with the aim of testing the homotheticity assumption, and takes the form:

 

This production function was also applied by Sargan in 1971, who called it a log-quadratic function. It should be noted that the term "transcendental logarithmic production function", abbreviated to "translog production function", was introduced by Christensen, Jorgenson and Lau in two papers published in 1971 and 1973, which addressed issues such as the strong additivity and homogeneity of the Cobb-Douglas and CES production functions and their respective implications for the production frontier. The general form of the translog production function with multiple inputs from the production side is denoted as below:

Translog production functions represent a class of flexible functional forms for production functions (Ch. Allen, St. Hall, 1997). One of the main advantages of this production function is that, unlike the Cobb-Douglas production function, it does not impose rigid premises such as perfect or "smooth" substitution between production factors or perfect competition in the market for production factors (J. Klacek et al., 2007). In addition, the translog production function allows the relationship between output and the production factors to be treated as nonlinear rather than linear. Because of these properties, the translog production function is suitable for second-order approximation of a linear-homogeneous production function, for determining the Allen elasticities of substitution, for determining the production frontier, and for measuring total factor productivity dynamics.
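
As an illustration of the flexibility discussed above, the sketch below fits a two-input translog specification, ln Y = β0 + βK ln K + βL ln L + βKK (ln K)² + βLL (ln L)² + βKL ln K ln L, by OLS on simulated data. The notation, data and test shown are illustrative assumptions rather than the exact specification estimated in this study.

# Minimal sketch: OLS estimation of a two-input translog production function
# on simulated data; the form nests Cobb-Douglas when the second-order
# coefficients are jointly zero.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
lnK = np.log(rng.uniform(100, 1000, n))
lnL = np.log(rng.uniform(50, 500, n))
lnY = (0.5 + 0.3*lnK + 0.6*lnL
       + 0.02*lnK**2 + 0.01*lnL**2 - 0.03*lnK*lnL
       + rng.normal(0, 0.05, n))

X = sm.add_constant(pd.DataFrame({
    "lnK": lnK, "lnL": lnL,
    "lnK2": lnK**2, "lnL2": lnL**2, "lnKlnL": lnK*lnL,
}))
translog = sm.OLS(lnY, X).fit()
print(translog.params)
# Testing whether the second-order terms are jointly zero checks whether
# the data prefer the simpler Cobb-Douglas form.
print(translog.f_test("lnK2 = 0, lnL2 = 0, lnKlnL = 0"))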

3.8 Random Effect Model (REM)

In the random effects model (REM), the error term is assumed to be distributed independently of the regressors. The REM estimator draws on variation from both the time and the cross-sectional dimensions of the data. In the random effects model, the individual-specific effect is a random variable that is uncorrelated with the explanatory variables in all time periods, past and future, for the same individual.

When the sample is small, the pooled OLS estimator is unbiased; however, the small-sample properties of the RE estimator cannot be established. The RE estimator is asymptotically normal as the number of individuals grows, even with the time dimension fixed. The test usually applied to examine the REM is the Lagrange Multiplier (LM) test, which distinguishes it from the FEM. The REM assumes that the intercepts and slopes are constant; in other words, the correlation between the error term and every regressor should be zero, since differences across groups enter not through the intercepts but through the variance of the error term. The REM is then estimated by generalized least squares once the within-group variance structure is known. The REM equation is shown below.

Y_it = α + βX_it + u_it,

u_it = μ_i + ν_it,

μ_i ~ IID(0, σ_μ²)

ν_it ~ IID(0, σ_ν²)

For i = 1, 2, …, N and t = 1, 2, …, T, where μ_i and ν_it are mutually independent.

With reference to the equation above, the assumption is that the intercept of an individual unit is drawn at random from a much larger population with a constant mean value. The individual intercept is then treated as a deviation from this constant mean. This specification is feasible and applicable when the intercept of each cross-sectional unit is uncorrelated with the predictor variables.
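
A minimal sketch of one-way random-effects estimation on simulated panel data, using the RandomEffects estimator from the linearmodels package; the panel dimensions, variable names and data-generating process are illustrative assumptions.

# Minimal sketch: random effects (GLS) estimation with linearmodels.
# y_it = a + b*x_it + mu_i + nu_it, where mu_i is uncorrelated with x_it.
import numpy as np
import pandas as pd
from linearmodels.panel import RandomEffects

rng = np.random.default_rng(2)
N, T = 50, 10
idx = pd.MultiIndex.from_product([range(N), range(T)], names=["entity", "time"])
mu = np.repeat(rng.normal(0, 1.0, N), T)   # individual-specific random effect
x = rng.normal(0, 1.0, N*T)                # regressor, independent of mu
y = 1.0 + 0.5*x + mu + rng.normal(0, 1.0, N*T)

df = pd.DataFrame({"y": y, "x": x}, index=idx)
exog = df[["x"]].assign(const=1.0)
re_res = RandomEffects(df["y"], exog).fit()
print(re_res)   # GLS estimates of the slope, intercept and variance components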

3.8.1 Fixed Effect Model (FEM)

The fixed effects model (FEM) treats the individual component of the equation's error term as a variable that may be correlated with the observed regressors. We assume that the time-varying explanatory variables in the FEM are not perfectly collinear, that they have non-zero within-variance (i.e. variation over time for a given individual), and that they do not contain many extreme values. Consequently, a constant or any time-invariant variable cannot be included among the explanatory variables. The FEM examines group differences in the intercepts, under the assumption of identical slopes and constant variance across entities, while allowing the entity-specific component of the error to be correlated with the other regressors. The FEM is estimated by the least squares dummy variable (LSDV) and within-effects estimation methods. In addition, the incremental F test is used to test the FEM; the variance of the error term remains constant while the intercept varies across groups and/or over time. The FEM has both advantages and drawbacks. On the positive side, estimation remains consistent when the omitted heterogeneity is correlated with the regressors, so FE estimators resolve the bias arising from such omitted variables. On the negative side, time-invariant regressors are swept out, so a substantial amount of information is lost and degrees of freedom are reduced; therefore, when choosing a suitable and feasible estimator, FE estimators should not be the preferred option unless no better estimator is available.

The fixed effects model involves incidental parameters. Most authors describe a parameter as incidental when its dimension increases with the sample size. The burden posed by such a parameter is worse than that of a typical nuisance parameter, whose dimension remains fixed. The FEM thus stands in contrast to the REM: it is applicable when the (random) intercept of each cross-sectional unit is correlated with the regressors.

 

Cor(α_i, X_it) ≠ 0

Let D_ij denote a dummy variable that is 0 for all observations with i ≠ j and 1 for i = j. Then the model can be written as

Y_it = Σ_{j=1}^{N} α_j D_ij + βX_it + ε_it,        i = 1, …, N, t = 1, …, T

The equation above gives the least-squares dummy variable (LSDV), or within, fixed-effects estimator. The regression meets the requirements of the Gauss-Markov theorem, including the assumption that the explanatory variables X are non-stochastic, so the LSDV estimator is unbiased, consistent, and linear efficient, in short "BLUE".
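
A minimal sketch contrasting the within (entity-demeaned) fixed-effects estimator with the equivalent LSDV regression, using the linearmodels and statsmodels packages on simulated data; the variable names and data-generating process are illustrative assumptions.

# Minimal sketch: fixed effects via the within estimator (PanelOLS with
# entity_effects=True) and via least-squares dummy variables (LSDV);
# both approaches return the same slope estimate.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(3)
N, T = 40, 8
idx = pd.MultiIndex.from_product([range(N), range(T)], names=["entity", "time"])
alpha_i = np.repeat(rng.normal(0, 1.0, N), T)   # entity-specific intercepts
x = 0.7*alpha_i + rng.normal(0, 1.0, N*T)       # regressor correlated with alpha_i
y = 2.0 + 0.5*x + alpha_i + rng.normal(0, 1.0, N*T)
df = pd.DataFrame({"y": y, "x": x}, index=idx)

# Within (FE) estimator: the entity effects are absorbed by demeaning.
fe_res = PanelOLS(df["y"], df[["x"]], entity_effects=True).fit()
print("within estimate:", fe_res.params["x"])

# LSDV: include an explicit dummy for each entity (no common constant).
dummies = pd.get_dummies(df.index.get_level_values("entity"), prefix="d").astype(float)
dummies.index = df.index
lsdv = sm.OLS(df["y"], pd.concat([df[["x"]], dummies], axis=1)).fit()
print("LSDV estimate:  ", lsdv.params["x"])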

 

3.9 Panel unit root test

Panel unit root testing originates from time-series unit root testing and is applied to examine whether a variable in the equation is stationary or contains a unit root. Based on Kunst, Nell and Zimmermann (2011), five types of panel unit root tests were introduced in their paper: the Levin-Lin-Chu test (LLC), the Im, Pesaran and Shin test (IPS), Breitung's test, the Fisher-type ADF and PP tests, and the Hadri test. These tests fall into two groups. The first group, consisting of the LLC, IPS, Fisher-ADF and Breitung tests, estimates regressions with lagged difference terms. The second group, consisting of the Hadri, LLC and Fisher-PP tests, estimates regressions that involve kernel weighting.

Panel unit root tests have greater power than time-series unit root tests. Including heterogeneous cross-sectional data enriches the time-series information and yields limiting statistics that are normally distributed. It is necessary to ensure that the variables in the equation are stationary before proceeding to the panel cointegration test, as this helps to preclude the spurious regression problem. When two variables are trending over time, a regression of one on the other may show a high R² even if the two have no relationship at all. It can be shown that the standard assumptions for asymptotic analysis are invalid if the variables in the regression are non-stationary; the ordinary "t-ratios" will not follow a t-distribution, so hypothesis tests on the parameters cannot be performed (R² > Durbin-Watson test statistic).
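
The spurious-regression problem described above can be illustrated with a short simulation: regressing one independent random walk on another typically produces a high R² together with a Durbin-Watson statistic far below 2. The sketch below is a hypothetical illustration, not a result from this study.

# Minimal sketch: spurious regression between two independent random walks.
# Expect a deceptively high R-squared and a very low Durbin-Watson statistic.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(4)
T = 500
y = np.cumsum(rng.normal(size=T))   # random walk 1
x = np.cumsum(rng.normal(size=T))   # random walk 2, independent of y

res = sm.OLS(y, sm.add_constant(x)).fit()
print("R-squared:     ", round(res.rsquared, 3))
print("Durbin-Watson: ", round(durbin_watson(res.resid), 3))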

The AR(1) process can be summarized by the following cases:

Y_t = α + ρY_{t-1} + ε_t

Where,

    When ρ > 1, Y_t is an explosive process (non-stationary)

    When ρ = 1, Y_t is a unit root process (a non-stationary random walk)

    When ρ < 1, Y_t is not a unit root process (a stationary process)

The null hypothesis is commonly defined as the presence of a unit root, and the alternative hypothesis can be stationarity, trend stationarity or an explosive root, depending on which test is applied. In general, the unit root testing approach implicitly assumes that the time series under examination, Y_t, can be expressed as

Y_t = X_t + Z_t + ε_t

where:

     X_t - the deterministic component (trend, seasonal component, etc.)

     Z_t - the stochastic component

     ε_t - the stationary error process.

If the series is stationary, the deterministic component strongly influences its behaviour; if instead the model has a unit root, the series is non-stationary. In that case the analysis can proceed to the next stage, cointegration techniques, by taking the variables in first differences so that they become stationary I(1) series. Both the LLC and the IPS tests are applied in this paper.

3.9.1 Im, Pesaran and Shin (IPS)

The Im, Pesaran and Shin (IPS) test, which builds on the Dickey-Fuller procedure, has been chosen for this study. Im, Pesaran and Shin proposed a test for the existence of unit roots in panels that combines information from the time-series and the cross-sectional dimensions, so that fewer time observations are required for the test to have power. Because the superior power of the IPS test for long-run relationship analysis based on panel data has been examined and verified by researchers in the field, this procedure has been adopted in this study. IPS begins with a separate ADF regression for each cross-section, with individual effects and no time trend:

ΔY_it = α_i + ρ_i Y_{i,t-1} + Σ_{j=1}^{p_i} β_ij ΔY_{i,t-j} + ε_it,        where i = 1, . . ., N and t = 1, . . ., T

The null and alternative hypotheses are defined as follows:

H0 : ρ_i = 0 for all i

H1 : ρ_i < 0 for at least some i

Hence, under the null hypothesis all series in the panel are non-stationary processes, while under the alternative hypothesis a fraction of the series in the panel is assumed to be stationary. This is in contrast to the LLC test, which assumes that all series are stationary under the alternative hypothesis. The errors are presumed to be serially autocorrelated, with different serial correlation properties and different variances across units. IPS proposed a group-mean Lagrange multiplier statistic for testing the null hypothesis. They captured the N cross-section units by applying separate unit root tests; the test is based on the Augmented Dickey-Fuller (ADF) statistics averaged across groups. After the separate ADF regressions are estimated, the average of the t-statistics for ρ_i obtained from the individual ADF regressions is

t-bar_NT = (1/N) Σ_{i=1}^{N} t_iT                                             (4)

When standardized, the t-bar statistic converges to the standard normal distribution as N and T tend to infinity.
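
Neither the IPS nor the LLC statistic is available as a single ready-made function in statsmodels, but the core idea behind the IPS t-bar, running a separate ADF regression for each cross-section and averaging the resulting t-statistics, can be sketched as follows. The standardization against the IPS moments (and hence the exact critical values) is omitted, and the panel data are simulated assumptions.

# Minimal sketch of the IPS idea: an individual ADF test for each
# cross-sectional unit, with the t-statistics averaged into a "t-bar".
# The IPS mean/variance standardization is omitted, so this illustrates
# the construction rather than reproducing the exact test.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
N, T = 20, 60
panel = pd.DataFrame({i: np.cumsum(rng.normal(size=T)) for i in range(N)})  # unit-root series

t_stats = []
for i in panel.columns:
    adf_stat, pvalue, *_ = adfuller(panel[i], regression="c", autolag="AIC")
    t_stats.append(adf_stat)

t_bar = np.mean(t_stats)
print("average ADF t-statistic (t-bar):", round(t_bar, 3))
# Strongly negative values of the (standardized) t-bar lead to rejection of
# the null that every series in the panel contains a unit root.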
IPS (1997) demonstrated that the t-bar test performs well even for small values of N and T. Because a common time-specific component may be present in the errors of the different regressions, IPS also proposed a cross-sectionally demeaned version of both tests.

3.9.2 Levin-Lin-Chu Test (LLC)

The LLC test assumes a common panel unit root process, implying that the autoregressive coefficient is the same across all cross-sections. Individual unit root tests have limited power, where the power of a test is the probability of rejecting the null hypothesis of a unit root when it is false; the problem is most severe when the root is close to unity. The LLC test is based on the ADF regression:

ΔY_it = ρ Y_{i,t-1} + Σ_{j=1}^{p_i} φ_ij ΔY_{i,t-j} + α_i + δ_i t + ε_it,        where i = 1, . . ., N and t = 1, . . ., T

The Levin-Lin-Chu test (LLC) proposes the hypotheses shown below:

H0: every time series contains a unit root

H1: every time series is stationary

The individual effects (α_i) and time trends (δ_i t) are allowed to differ across series, so the deterministic components, as well as the lag order of the dependent variable, may be heterogeneous across units. Moreover, according to Levin, Lin and Chu (2002), the error process ε_it is assumed to be independently distributed across individuals and to follow a stationary invertible ARMA process for each individual:

ε_it = Σ_{j=1}^{∞} θ_ij ε_{i,t-j} + u_it

The Levin-Lin-Chu test has an essential requirement that √N_T / T → 0, while sufficient conditions are N_T / T → 0 and N_T → ∞ (N_T indicates that the cross-sectional dimension N is a monotonic function of the time dimension T). According to the authors, the statistic performs well when N lies between 10 and 250 and T lies between 5 and 250. When T is too small, the test is undersized and has relatively low power. One drawback of the test statistic is its heavy dependence on the assumption of cross-sectional independence. Furthermore, the null hypothesis that every cross-section has a unit root is extremely restrictive; in other words, it does not allow the intermediate case in which some individuals have a unit root while others do not. Levin et al. (2002) recommend individual time-series unit root tests when T is large, and the usual panel data procedures when N is large.

3.10 Panel Cointegration Tests

The next step is to test for cointegration between technical efficiency in the United States airline industry and the independent variables, following Pedroni's (1999, 2004) suggestion of applying panel cointegration tests.
The seven panel cointegration tests proposed by Pedroni (1999) are applied to the estimated residuals from the cointegrating regression, after normalization of the panel statistics with correction terms. Pedroni's procedures make use of the estimated residuals from the hypothesized long-run regression, which is expressed as follows:

Y_it = α_i + δ_i t + β_1i X_1i,t + β_2i X_2i,t + … + β_Mi X_Mi,t + e_i,t                    (1)

for t = 1, …, T; i = 1, …, N; m = 1, …, M, where T is the number of observations over time, N is the number of cross-sectional units in the panel, and M is the number of right-hand-side variables. In this setup, α_i is the member-specific intercept, or fixed-effects parameter, which varies across the individual cross-sectional units. The same is true of the slope coefficients and of the member-specific time effects, δ_i t.

Pedroni (1999, 2004) proposed both heterogeneous panel and heterogeneous group-mean panel test statistics for the panel cointegration test, and defined two sets of statistics. The first set consists of three statistics, all based on pooling the residuals along the within-dimension of the panel:

                                                                              (2)

                                                                              (3)

                                                                              (4)

where ê_i,t is the residual from the OLS estimation of Equation (1) and the other terms are defined in Pedroni. The second set of statistics is based on pooling the residuals along the between-dimension of the panel, which allows a heterogeneous autocorrelation parameter across members:

                                                                              (5)

                                                                              (6)

These statistics compute the group mean of the underlying individual conventional time-series statistics. The limiting distributions of each of these five statistics can be presented in one generalized form as

(X_{N,T} − μ√N) / √v  ⇒  N(0, 1)                                              (7)

where X_{N,T} is the corresponding form of the test statistic, while μ and v are the mean and variance of each test, respectively; both are tabulated in Table 2 of Pedroni (1999). Under the alternative hypothesis, the panel v-statistic diverges to positive infinity; it is therefore a one-tailed test, and large positive values reject the null of no cointegration. The remaining statistics diverge to negative infinity, so increasingly negative values reject the null.
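
Pedroni's panel and group-mean statistics are not provided as a built-in routine in statsmodels. As a rough illustration of the residual-based logic only, and not of Pedroni's actual statistics or critical values, the sketch below applies the Engle-Granger cointegration test to each cross-sectional unit of a simulated panel and inspects the unit-by-unit results.

# Minimal sketch: unit-by-unit residual-based (Engle-Granger) cointegration
# tests on a simulated panel. This only illustrates the residual-based idea
# behind Pedroni's procedure; it does not compute Pedroni's panel or
# group-mean statistics.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(6)
N, T = 10, 120
results = []
for i in range(N):
    x = np.cumsum(rng.normal(size=T))                  # I(1) regressor
    y = 1.0 + 0.8*x + rng.normal(scale=0.5, size=T)    # cointegrated with x
    t_stat, pvalue, _ = coint(y, x, trend="c")
    results.append({"unit": i, "t_stat": t_stat, "pvalue": pvalue})

print(pd.DataFrame(results).round(3))
# Consistently small p-values across units point toward rejecting the null
# of no cointegration for this simulated panel.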