3.6 Cobb-Douglas Production Function

The Cobb-Douglas production function is recognized throughout economics research as a production functional form, widely used to represent the technological relationship between the amounts of two or more inputs, typically physical capital and labor, and the amount of output produced from those inputs. The form was developed by Charles Cobb and Paul Douglas between 1927 and 1947.

The most common form of the Cobb-Douglas production function is expressed as:


Y = A L^β K^α

Where:

Y= total production (the real, inflation-adjusted value of all goods produced in a year)

A= total factor productivity

L= labor input (the total number of person-hours worked in a year)

K= capital input (the real value of buildings, equipment, and machinery)

α and β= the output elasticities of capital and labor, respectively
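
Taking logarithms turns the Cobb-Douglas form into a linear equation, ln Y = ln A + β ln L + α ln K, which can be estimated by OLS. The following is a minimal sketch in Python using simulated (hypothetical) data and illustrative parameter values:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated (hypothetical) data: Y = A * L^beta * K^alpha with noise.
    rng = np.random.default_rng(42)
    n = 200
    L = rng.uniform(10, 100, n)
    K = rng.uniform(10, 100, n)
    Y = 1.5 * L**0.7 * K**0.3 * np.exp(rng.normal(0, 0.05, n))
    df = pd.DataFrame({"lnY": np.log(Y), "lnL": np.log(L), "lnK": np.log(K)})

    # ln Y = ln A + beta * ln L + alpha * ln K, estimable by OLS.
    fit = smf.ols("lnY ~ lnL + lnK", data=df).fit()
    print(fit.params)   # Intercept ~ ln(1.5), lnL ~ 0.7, lnK ~ 0.3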

The Cobb-Douglas production function can be presented either as a short-run or a long-run production function. From the economist's perspective, the short run refers to a short time horizon over which an input such as capital must be held fixed, whereas the long-run production function refers to a long time horizon over which no input needs to be fixed (Coelli, 1997).

Balistreri (2003), however, discussed certain limitations of the Cobb-Douglas model. The measurement of capital and labor as the drivers of structural change has known drawbacks and remains contested within economics. From a structural perspective, the process of capital accumulation poses a complicated dynamic problem, so estimating capital input on a static foundation introduces misspecification error.

 

 

3.7 Translog Production Function

Translog production functions appeared in studies concerned with discovering and defining new, more flexible forms of the production function, as well as with their affinity to the CES production function. Priority in applying the translog form arguably belongs to J. Kmenta's proposal of 1967, which approximated the CES production function by a second-order Taylor series around the point where the elasticity of substitution takes the unitary value, the case corresponding to the Cobb-Douglas production function. This form of production function can be denoted as:

ln Y = ln A₃ + α₃ ln K + β₃ ln L + γ₃ (ln K − ln L)²

where:

ln= natural logarithm

Y= Output (Gross domestic product)

K= Fixed capital

L= Employed population.

A₃, α₃, β₃ and γ₃ are parameters to be estimated.

Two new forms of the production function were proposed by Griliches and Ringstad in 1971. The first form was established under the assumption that α₃ + β₃ = 1, under which the production function naturally transforms into a labor productivity function:

ln(Y/L) = ln A₃ + α₃ ln(K/L) + γ₃ [ln(K/L)]²

Clearly, the above function is a second-order polynomial in logarithms with a single input, the capital-labor ratio. The second production function was defined by relaxing the constraints imposed on the parameters of the function proposed by Kmenta, with the aim of testing the assumption of homotheticity, and takes the form:

ln Y = ln A + α ln K + β ln L + γ_K (ln K)² + γ_L (ln L)² + γ_KL ln K ln L

 

In fact, the above production function was also applied by Sargan in 1971, who named it a log-quadratic function. It should be noted that the compendious term "transcendental logarithmic production function", shortened to "translog production function", was coined by Christensen, Jorgenson and Lau in two research papers published in 1971 and 1973, which addressed issues such as the strong additivity and homogeneity of both the Cobb-Douglas and CES production functions and the implications of each for the production frontier. The common form of the translog production function, which takes multiple inputs from the production side into consideration, is denoted as:

ln Y = α₀ + Σᵢ αᵢ ln Xᵢ + ½ ΣᵢΣⱼ βᵢⱼ ln Xᵢ ln Xⱼ

The translog production functions indeed represent a class of more flexible functional forms for production functions (Ch. Allen, St. Hall, 1997). One of the principal benefits of this production function is that, unlike the Cobb-Douglas production function, it does not rest on rigid premises such as perfect or "smooth" substitution between production factors or perfect competition on the production factors market (J. Klacek et al., 2007). Also, the concept of the translog production function allows passing from a linear relationship between output and the production factors to a nonlinear one. Because of its properties, the translog production function is applicable for second-order approximation of a linear-homogeneous production function, the determination of the Allen elasticities of substitution, the determination of the production frontier, and the measurement of total factor productivity dynamics.
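
As an illustration, the two-input translog can be estimated by OLS after adding squared and interaction terms in logs. A minimal Python sketch with simulated (hypothetical) data and illustrative coefficient values:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated (hypothetical) data in logs for a two-input translog.
    rng = np.random.default_rng(7)
    n = 300
    lnK = np.log(rng.uniform(10, 100, n))
    lnL = np.log(rng.uniform(10, 100, n))
    lnY = (0.2 + 0.3 * lnK + 0.6 * lnL
           + 0.02 * lnK**2 + 0.01 * lnL**2 - 0.03 * lnK * lnL
           + rng.normal(0, 0.05, n))
    df = pd.DataFrame({"lnY": lnY, "lnK": lnK, "lnL": lnL})

    # OLS on log levels, squares, and the cross term; I() protects the arithmetic.
    translog = smf.ols("lnY ~ lnK + lnL + I(lnK**2) + I(lnL**2) + I(lnK*lnL)",
                       data=df).fit()
    print(translog.params)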

3.8 Random Effect Model (REM)

In the random effect model (REM), the error term is assumed to be distributed independently of the regressors. The REM estimator draws its variation from both the time and the cross-section dimensions. Under the random effects model, the individual-specific effect is a random variable that is uncorrelated with the independent variables in every time period, past and future, for a given individual.

When the sample size is small, the pooled OLS estimator is unbiased, but the small-sample properties of the RE estimator cannot be established. The RE estimator is asymptotically normal as the number of individuals N grows large, even with T fixed. The test usually applied to examine the REM, and to distinguish it from the FEM, is the Lagrange Multiplier (LM) test. The REM assumes that the intercepts and slopes are constant; in other words, the correlation between the error term and any regressor should be zero, because differences across groups enter not through the intercepts but through the variance of the error term. The REM is then estimated by generalized least squares (GLS) once the within-group variance structure is known. The REM equations are shown below.

Y_it = β₀ + β₁X_it + w_it,

w_it = μ_i + ε_it,

μ_i ∼ IID(0, σ_μ²)

ε_it ∼ IID(0, σ_ε²)

For i = 1, 2, …, N, and t = 1, 2, …, T. μ_i and ε_it are mutually independent.

Referring to the equations above, the assumption is that the intercept of an individual unit is randomly drawn from a much larger population with a constant mean value; the individual intercept is then treated as a deviation from that constant mean. This model is feasible and applicable when there is no correlation between the intercept of each cross-sectional unit and the predictor variables.
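
A minimal sketch of RE estimation in Python, assuming the third-party linearmodels package is available and using simulated (hypothetical) data that satisfy the RE assumption:

    import numpy as np
    import pandas as pd
    from linearmodels.panel import RandomEffects  # third-party 'linearmodels' package

    # Simulated (hypothetical) balanced panel indexed by (id, time).
    rng = np.random.default_rng(1)
    N, T = 100, 5
    idx = pd.MultiIndex.from_product([range(N), range(T)], names=["id", "time"])
    mu = np.repeat(rng.normal(size=N), T)   # individual effects, independent of x
    x = rng.normal(size=N * T)              # regressor uncorrelated with mu (RE assumption)
    y = 1.0 + 0.5 * x + mu + rng.normal(size=N * T)
    df = pd.DataFrame({"y": y, "x": x}, index=idx)
    df["const"] = 1.0

    # GLS-based random effects estimation.
    res = RandomEffects(df["y"], df[["const", "x"]]).fit()
    print(res.params)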

3.8.1 Fixed Effect Model (FEM)

The fixed effect model (FEM) treats the equation's error term as a variable that is partially correlated with the observed regressors. It is assumed that the time-varying explanatory variables in the FEM are not perfectly collinear, that they have non-zero within-variance (i.e., variation over time for a given individual), and that they do not have many extreme values. Consequently, a constant or any time-invariant variable cannot be included among the explanatory variables. The FEM examines group differences in intercepts, assuming identical slopes and constant variance across groups, while the individual effect absorbed into the error term is allowed to correlate with the other regressors. The FEM is estimated with the least squares dummy variable (LSDV) and within-effect estimation methods. In addition, the incremental F test is used to test the FEM, where the variance of the error term remains constant while the intercept varies across groups and/or over time. The FEM has both advantages and drawbacks. On the positive side, estimation remains consistent when the omitted heterogeneity is correlated with the regressors, so FE estimators resolve the bias arising from such omitted variables. On the negative side, time-invariant regressors are swept out, discarding a substantial amount of information and reducing the degrees of freedom; hence, when choosing a suitable estimator, the FE estimator should not be the preferred option unless no better estimator is available.

The fixed effect model involves incidental parameters. Most authors call a parameter incidental when its dimension grows as the sample size increases. The burden posed by such a parameter is worse than that of a typical nuisance parameter, whose dimension may remain fixed. In contrast to the REM, the FEM is applicable when there is a correlation between the (random) intercept of each cross-sectional unit and the regressors:

 

Cor(X_it, μ_i) ≠ 0

Let D_ij denote a dummy variable that is 0 for all observations with i ≠ j and 1 for i = j. Then, writing d_it = (D_i1, …, D_iN)′ and μ = (μ₁, …, μ_N)′, the model becomes:

Y_it = d_it′μ + βX_it + ε_it,   i = 1, …, N, t = 1, …, T

 

The above equation illustrates the least-squares dummy variables (LSDV), or within, FE estimator. The regression meets all the requirements of the Gauss-Markov theorem: with the explanatory variable X non-stochastic, the LSDV estimator is unbiased, consistent, and linear efficient, in short "BLUE" (best linear unbiased estimator).
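
The equivalence of the LSDV and within estimators can be checked numerically. A minimal Python sketch with simulated (hypothetical) data, where the individual effect is deliberately correlated with the regressor:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated (hypothetical) panel: individual effect correlated with x.
    rng = np.random.default_rng(0)
    N, T = 50, 10
    effects = np.repeat(rng.normal(size=N), T)
    df = pd.DataFrame({"i": np.repeat(np.arange(N), T)})
    df["x"] = rng.normal(size=N * T) + effects     # x correlated with the effect
    df["y"] = 2.0 * df["x"] + effects + rng.normal(size=N * T)

    # LSDV: one dummy per individual via the categorical term C(i).
    lsdv = smf.ols("y ~ x + C(i)", data=df).fit()

    # Within estimator: demean y and x by individual, then run OLS without intercept.
    dm = df[["y", "x"]] - df.groupby("i")[["y", "x"]].transform("mean")
    within = smf.ols("y ~ x - 1", data=dm).fit()

    print(lsdv.params["x"], within.params["x"])    # identical slope estimates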

 

3.9 Panel unit root test

Panel unit root testing originated from time series unit root testing, which is applied to study whether a variable in the equation is stationary or contains a unit root. Based on Kunst, Nell and Zimmermann (2011), five types of panel unit root tests were introduced in their research paper: the Levin-Lin-Chu (LLC) test, the Im, Pesaran and Shin (IPS) test, Breitung's test, the Fisher-type ADF and PP tests, and the Hadri test. These tests fall into two groups. The first group, consisting of the LLC, IPS, Fisher-ADF and Breitung tests, is applied to estimate regressions with lagged difference terms. The other group, comprising the Hadri, LLC and Fisher-PP tests, is used for regression estimation that involves kernel weighting.

Panel unit root tests have greater power than time series unit root tests. Including heterogeneous cross-section data enhances the time series information and the normality of the limiting distribution of the test statistics. It is necessary to ensure that the variables included in the equation are stationary before proceeding to the panel cointegration test, as this helps to preclude the spurious regression problem. When two variables are trending over time, a regression of one on the other may produce a high R² even if the variables have absolutely no relationship with each other. It can be shown that the standard assumptions for asymptotic analysis are invalid when the variables in the regression are non-stationary; in particular, the ordinary "t-ratios" do not follow a t-distribution, so hypothesis tests on the parameters cannot be performed (R² > Durbin-Watson test statistic is the classic symptom).
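
The spurious regression problem is easy to reproduce. The following sketch (simulated, hypothetical data) regresses one random walk on another, independent one; the R² is typically far from zero despite the absence of any true relationship:

    import numpy as np
    import statsmodels.api as sm

    # Two independent random walks: no true relationship, yet high R^2 is common.
    rng = np.random.default_rng(5)
    T = 500
    x = np.cumsum(rng.normal(size=T))   # unit-root (non-stationary) series
    y = np.cumsum(rng.normal(size=T))   # independent unit-root series

    res = sm.OLS(y, sm.add_constant(x)).fit()
    print(res.rsquared)                 # often sizeable despite independence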

The AR(1) process can be reduced to the following cases (a brief simulation of the stationary and random-walk cases follows below):

Y_t = α + ρY_{t-1} + ε_t

Where,

    When ρ > 1, Y_t is not a unit root process (a non-stationary, explosive process)

    When ρ = 1, Y_t is a unit root process (a non-stationary process: a random walk)

    When ρ < 1, Y_t is a stationary process
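
To illustrate, the two testable cases can be simulated and checked with the augmented Dickey-Fuller (ADF) test from statsmodels, whose null hypothesis is that the series has a unit root. This is a single-series sketch of the idea underlying the panel tests above, with all values hypothetical:

    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(3)
    T = 500

    def ar1(rho):
        # Build Y_t = rho * Y_{t-1} + eps_t from a zero start.
        eps = rng.normal(size=T)
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = rho * y[t - 1] + eps[t]
        return y

    for rho in (0.5, 1.0):                # stationary vs. random walk
        pval = adfuller(ar1(rho))[1]      # H0: the series has a unit root
        print(f"rho={rho}: ADF p-value = {pval:.3f}")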