Faculty Research

Search Publications

Recent Journal Publications by COB Faculty

Results: 1221
Academic Journal
Management

“An Assessment of the Magnitude of Effect Sizes: Evidence from 30 Years of Meta-Analysis in Management”

This study compiles information from more than 250 meta-analyses conducted over the past 30 years to assess the magnitude of reported effect sizes in the OB/HR literatures. Our analysis revealed an average uncorrected effect of r = .227 and an average corrected effect of ρ = .278 (SDρ = .140). Based on the distribution of effect sizes we report, Cohen's effect size benchmarks are not appropriate for use in OB/HR research, as they overestimate the actual breakpoints between small, medium, and large effects. We also assessed the average statistical power reported in meta-analytic conclusions and found substantial evidence that the majority of primary studies in the management literature are statistically underpowered. Finally, we investigated the impact of the file drawer problem in meta-analyses; our findings indicate that it is not a significant concern for meta-analysts. We conclude by discussing various implications of this study for OB/HR researchers.
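The abstract's claim that primary studies are underpowered can be illustrated with a quick back-of-the-envelope calculation (a hypothetical sketch, not the authors' method): under the Fisher z approximation, the two-tailed power to detect the reported average uncorrected effect of r = .227 depends sharply on sample size.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_for_r(r, n, z_crit=1.96):
    """Approximate two-tailed power for detecting a true correlation r
    with sample size n, using the Fisher z transformation
    (test statistic is approximately normal with SD 1/sqrt(n - 3))."""
    delta = math.atanh(r) * math.sqrt(n - 3)
    return (1.0 - norm_cdf(z_crit - delta)) + norm_cdf(-z_crit - delta)

# Power at the abstract's average effect r = .227 for a few sample sizes
# (roughly .35, .62, and .90 -- well below the conventional .80 until n ~ 200):
for n in (50, 100, 200):
    print(n, round(power_for_r(0.227, n), 2))
```

With the field's average effect, a study of 50 respondents has only about one chance in three of detecting it, which is consistent with the abstract's underpowering claim.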
Academic Journal
BIS

“An Efficient Heuristic for Solving an Extended Capacitated Concentrator Location Problem”

In this paper, a mathematical model and a solution algorithm are developed for solving an extended capacitated concentrator location problem. Our model extends the conventional formulation by simultaneously addressing two capacity constraints on each concentrator to be selected, total connection ports and maximum data processing rate, when satisfying the communication demands of the given end-user nodes. Since the problem is NP-complete, an efficient and effective Lagrangian heuristic is developed and tested on 100 randomly generated test problems with sizes ranging from 30 (nodes) × 30 (concentrators) to 150 × 30. Altogether, 58% of the tested problems are solved optimally, with an average solution gap of 0.36% from optimality, and average solution times range from a few seconds to half a minute.
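The two capacity constraints the abstract describes can be made concrete on a toy instance (all data and names below are invented for illustration; the paper's actual method is a Lagrangian heuristic, not the brute-force enumeration sketched here):

```python
from itertools import product

# Hypothetical toy instance: 4 end-user nodes, 3 candidate concentrators.
nodes = {"n1": 3, "n2": 5, "n3": 2, "n4": 4}     # data-rate demand per node
concs = {                                        # per concentrator: open cost,
    "c1": {"fixed": 10, "ports": 2, "rate": 8},  # connection-port capacity,
    "c2": {"fixed": 12, "ports": 3, "rate": 12}, # and data-processing-rate capacity
    "c3": {"fixed": 8,  "ports": 2, "rate": 6},
}
connect = {  # connection cost (node, concentrator)
    ("n1", "c1"): 2, ("n1", "c2"): 4, ("n1", "c3"): 3,
    ("n2", "c1"): 5, ("n2", "c2"): 1, ("n2", "c3"): 6,
    ("n3", "c1"): 1, ("n3", "c2"): 3, ("n3", "c3"): 2,
    ("n4", "c1"): 4, ("n4", "c2"): 2, ("n4", "c3"): 5,
}

def solve_bruteforce():
    """Enumerate every assignment of nodes to concentrators and keep the
    cheapest one that satisfies BOTH capacity constraints (ports and rate)
    on every opened concentrator."""
    names = list(nodes)
    best_cost, best_assign = float("inf"), None
    for choice in product(concs, repeat=len(names)):
        assign = dict(zip(names, choice))
        opened = set(choice)
        feasible = all(
            sum(1 for v in assign.values() if v == c) <= concs[c]["ports"]
            and sum(nodes[u] for u, v in assign.items() if v == c) <= concs[c]["rate"]
            for c in opened
        )
        if not feasible:
            continue
        cost = (sum(concs[c]["fixed"] for c in opened)
                + sum(connect[(u, v)] for u, v in assign.items()))
        if cost < best_cost:
            best_cost, best_assign = cost, assign
    return best_cost, best_assign

print(solve_bruteforce())
```

Note that the cheapest pair of concentrators to open (c1 + c3, fixed cost 18) is ruled out by the rate constraint for most assignments, which is exactly the interplay between the two capacities that the extended model captures; enumeration is only viable at toy scale, hence the paper's heuristic.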
Academic Journal
Marketing

“An Empirical Study of Strategic Opacity in Crowdsourced Quality Evaluations”

Crowd-voting mechanisms are commonly used to implement scalable evaluations of crowdsourced creative submissions. Unfortunately, the use of crowd-voting also raises the potential for gaming and manipulation. Manipulation is problematic because i) submitters’ motivation depends on their belief that the system is meritocratic, and ii) manipulated feedback may undermine learning, as submitters seek to learn from received evaluations and those of peers. In this work, we consider a design approach to addressing the issue, focusing on the notion of strategic opacity, i.e., purposefully obfuscating evaluation procedures. On the one hand, opacity may reduce the incentive and thus prevalence of vote manipulation, and submitters may instead dedicate that time and effort to improving their submission quantity or quality. On the other hand, because opacity makes it difficult for submitters to discern the returns to legitimate effort, submitters may also reduce their submission effort, or simply exit the market. We explore this tension via a multi-method study employing field experiments at 99designs and a controlled experiment on Amazon Mechanical Turk. We observe consistent results across all experiments: opacity leads to reductions in gaming in these crowdsourcing contests, and significant increases in the allocation of effort toward legitimate versus illegitimate activities, with no discernible influence on contest participation. We discuss boundary conditions and the implications for contest organizers and contest platform operators.
Academic Journal
Finance

“An Examination of the Differential Impact of Regulation FD on Analysts' Forecast Accuracy”

Regulation Fair Disclosure (FD) requires companies to publicly disseminate information, effectively preventing the selective pre-earnings-announcement guidance to analysts that was common in the past. We investigate how Regulation FD's reduction of information disparity across analysts affects their forecast accuracy. Proxies for private information, including brokerage size and analyst company-specific experience, lose their explanatory power for analysts' relative accuracy after Regulation FD. Analyst forecast accuracy declines overall, but analysts who were relatively less accurate (more accurate) before Regulation FD improve (deteriorate) after implementation. Our findings are consistent with selective guidance partially explaining variation in the forecasting accuracy of analysts before Regulation FD.