From Wikipedia, the free encyclopedia

Robust statistics are statistics that maintain their properties even if the underlying distributional assumptions are incorrect. Robust statistical methods have been developed for many common problems, such as estimating location, scale, and regression parameters. One motivation is to produce statistical methods that are not unduly affected by outliers. Another motivation is to provide methods with good performance when there are small departures from a parametric distribution. For example, robust methods work well for mixtures of two normal distributions with different standard deviations; under this model, non-robust methods like a t-test work poorly.[1][2]

Introduction

Robust statistics seek to provide methods that emulate popular statistical methods, but are not unduly affected by outliers or other small departures from model assumptions. In statistics, classical estimation methods rely heavily on assumptions that are often not met in practice. In particular, it is often assumed that the data errors are normally distributed, at least approximately, or that the central limit theorem can be relied on to produce normally distributed estimates. Unfortunately, when there are outliers in the data, classical estimators often have very poor performance, when judged using the breakdown point and the influence function described below.

The practical effect of problems seen in the influence function can be studied empirically by examining the sampling distribution of proposed estimators under a mixture model, where one mixes in a small amount (1–5% is often sufficient) of contamination. For instance, one may use a mixture of 95% of a normal distribution and 5% of a normal distribution with the same mean but a significantly higher standard deviation (representing outliers).
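
For illustration, a minimal simulation sketch (assuming Python with numpy; the contamination fraction, outlier standard deviation, and sample sizes are arbitrary choices) compares the sampling variability of the mean and the median under such a mixture:

```python
import numpy as np

rng = np.random.default_rng(0)

def contaminated_sample(n, eps=0.05, sd_out=10.0):
    """Draw n points: with probability eps from N(0, sd_out^2), otherwise from N(0, 1)."""
    x = rng.normal(0.0, 1.0, n)
    is_outlier = rng.random(n) < eps
    x[is_outlier] = rng.normal(0.0, sd_out, is_outlier.sum())
    return x

# Sampling distributions of the mean and median under 5% contamination.
means = np.array([contaminated_sample(100).mean() for _ in range(2000)])
medians = np.array([np.median(contaminated_sample(100)) for _ in range(2000)])
print("sd of sample mean:  ", means.std())    # inflated by the contaminating component
print("sd of sample median:", medians.std())  # much less affected
```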

Robust parametric statistics can proceed in two ways:

  • by designing estimators so that a pre-selected behaviour of the influence function is achieved
  • by replacing estimators that are optimal under the assumption of a normal distribution with estimators that are optimal for, or at least derived for, other distributions; for example, using the t-distribution with low degrees of freedom (high kurtosis) or with a mixture of two or more distributions.

Robust estimates have been studied for the following problems:

  • estimating location parameters
  • estimating scale parameters
  • estimating regression coefficients

Definition

There are various definitions of a "robust statistic". Strictly speaking, a robust statistic is resistant to errors in the results, produced by deviations from assumptions[4] (e.g., of normality). This means that if the assumptions are only approximately met, the robust estimator will still have a reasonable efficiency, and reasonably small bias, as well as being asymptotically unbiased, meaning having a bias tending towards 0 as the sample size tends towards infinity.

Usually, the most important case is distributional robustness - robustness to breaking of the assumptions about the underlying distribution of the data.[4] Classical statistical procedures are typically sensitive to "longtailedness" (e.g., when the distribution of the data has longer tails than the assumed normal distribution). This implies that they will be strongly affected by the presence of outliers in the data, and the estimates they produce may be heavily distorted if there are extreme outliers in the data, compared to what they would be if the outliers were not included in the data.

By contrast, more robust estimators that are not so sensitive to distributional distortions such as longtailedness are also resistant to the presence of outliers. Thus, in the context of robust statistics, distributionally robust and outlier-resistant are effectively synonymous.[4] For one perspective on research in robust statistics up to 2000, see Portnoy & He (2000).

Some experts prefer the term resistant statistics for distributional robustness, and reserve 'robustness' for non-distributional robustness, e.g., robustness to violation of assumptions about the probability model or estimator, but this is a minority usage. Plain 'robustness' to mean 'distributional robustness' is common.

When considering how robust an estimator is to the presence of outliers, it is useful to test what happens when an extreme outlier is added to the dataset, and to test what happens when an extreme outlier replaces one of the existing data points, and then to consider the effect of multiple additions or replacements.

Examples

The mean is not a robust measure of central tendency. If the dataset is, e.g., the values {2,3,5,6,9}, then if we add another datapoint with value -1000 or +1000 to the data, the resulting mean will be very different from the mean of the original data. Similarly, if we replace one of the values with a datapoint of value -1000 or +1000 then the resulting mean will be very different from the mean of the original data.

The median is a robust measure of central tendency. Taking the same dataset {2,3,5,6,9}, if we add another datapoint with value -1000 or +1000 then the median will change slightly, but it will still be similar to the median of the original data. If we replace one of the values with a data point of value -1000 or +1000 then the resulting median will still be similar to the median of the original data.

Described in terms of breakdown points, the median has a breakdown point of 50%, meaning that half the points must be outliers before the median can be moved outside the range of the non-outliers, while the mean has a breakdown point of 0, as a single large observation can throw it off.
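
A quick numerical check of this example (a sketch assuming Python with numpy):

```python
import numpy as np

data = np.array([2, 3, 5, 6, 9])
added = np.append(data, 1000)             # add an extreme data point
replaced = np.array([2, 3, 5, 6, 1000])   # replace one value by an extreme one

print(np.mean(data), np.median(data))          # 5.0 5.0
print(np.mean(added), np.median(added))        # about 170.8, 5.5
print(np.mean(replaced), np.median(replaced))  # 203.2, 5.0
```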

The median absolute deviation and interquartile range are robust measures of statistical dispersion, while the standard deviation and range are not.

Trimmed estimators and Winsorised estimators are general methods to make statistics more robust. L-estimators are a general class of simple statistics, often robust, while M-estimators are a general class of robust statistics, and are now the preferred solution, though they can be quite involved to calculate.

Speed-of-light data

Gelman et al. in Bayesian Data Analysis (2004) consider a data set relating to speed-of-light measurements made by Simon Newcomb. The data sets for that book can be found via the Classic data sets page, and the book's website contains more information on the data.

Although the bulk of the data looks to be more or less normally distributed, there are two obvious outliers. These outliers have a large effect on the mean, dragging it towards them, and away from the center of the bulk of the data. Thus, if the mean is intended as a measure of the location of the center of the data, it is, in a sense, biased when outliers are present.

Also, the distribution of the mean is known to be asymptotically normal due to the central limit theorem. However, outliers can make the distribution of the mean non-normal, even for fairly large data sets. Besides this non-normality, the mean is also inefficient in the presence of outliers and less variable measures of location are available.

Estimation of location

The plot below shows a density plot of the speed-of-light data, together with a rug plot (panel (a)). Also shown is a normal Q–Q plot (panel (b)). The outliers are visible in these plots.

Panels (c) and (d) of the plot show the bootstrap distribution of the mean (c) and the 10% trimmed mean (d). The trimmed mean is a simple, robust estimator of location that deletes a certain percentage of observations (10% here) from each end of the data, then computes the mean in the usual way. The analysis was performed in R and 10,000 bootstrap samples were used for each of the raw and trimmed means.
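
The analysis in the text was carried out in R; a rough Python analogue of the bootstrap comparison (assuming numpy and scipy, and using placeholder data rather than Newcomb's measurements) might look like:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def bootstrap(estimator, x, n_boot=10_000):
    """Bootstrap distribution of a location estimator."""
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    return np.array([estimator(x[i]) for i in idx])

x = rng.standard_t(df=3, size=66)  # placeholder heavy-tailed data, not the Newcomb data
boot_mean = bootstrap(np.mean, x)
boot_trimmed = bootstrap(lambda s: stats.trim_mean(s, 0.1), x)  # cut 10% from each end
print("bootstrap sd of mean:        ", boot_mean.std())
print("bootstrap sd of trimmed mean:", boot_trimmed.std())
```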

The distribution of the mean is clearly much wider than that of the 10% trimmed mean (the plots are on the same scale). Also whereas the distribution of the trimmed mean appears to be close to normal, the distribution of the raw mean is quite skewed to the left. So, in this sample of 66 observations, only 2 outliers cause the central limit theorem to be inapplicable.

Robust statistical methods, of which the trimmed mean is a simple example, seek to outperform classical statistical methods in the presence of outliers, or, more generally, when underlying parametric assumptions are not quite correct.

Whilst the trimmed mean performs well relative to the mean in this example, better robust estimates are available. In fact, the mean, median and trimmed mean are all special cases of M-estimators. Details appear in the sections below.

Estimation of scale

The outliers in the speed-of-light data have more than just an adverse effect on the mean; the usual estimate of scale is the standard deviation, and this quantity is even more badly affected by outliers because the squares of the deviations from the mean go into the calculation, so the outliers' effects are exacerbated.

The plots below show the bootstrap distributions of the standard deviation, the median absolute deviation (MAD) and the Rousseeuw–Croux (Qn) estimator of scale.[5] The plots are based on 10,000 bootstrap samples for each estimator, with some Gaussian noise added to the resampled data (smoothed bootstrap). Panel (a) shows the distribution of the standard deviation, (b) of the MAD and (c) of Qn.

The distribution of standard deviation is erratic and wide, a result of the outliers. The MAD is better behaved, and Qn is a little bit more efficient than MAD. This simple example demonstrates that when outliers are present, the standard deviation cannot be recommended as an estimate of scale.
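
A minimal sketch of this comparison (assuming Python with numpy and scipy; scipy's `median_abs_deviation` with `scale="normal"` rescales the MAD to be consistent for the standard deviation at the normal distribution, and the planted outliers are arbitrary; a Qn implementation is available in some robust-statistics libraries but is not shown here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=100)
x[:5] = 20.0  # plant a few gross outliers

print("sample standard deviation:", np.std(x, ddof=1))                              # blown up by the outliers
print("MAD (normal-consistent):  ", stats.median_abs_deviation(x, scale="normal"))  # still close to 1
```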

Manual screening for outliers

Traditionally, statisticians would manually screen data for outliers, and remove them, usually checking the source of the data to see whether the outliers were erroneously recorded. Indeed, in the speed-of-light example above, it is easy to see and remove the two outliers prior to proceeding with any further analysis. However, in modern times, data sets often consist of large numbers of variables being measured on large numbers of experimental units. Therefore, manual screening for outliers is often impractical.

Outliers can often interact in such a way that they mask each other. As a simple example, consider a small univariate data set containing one modest and one large outlier. The estimated standard deviation will be grossly inflated by the large outlier. The result is that the modest outlier looks relatively normal. As soon as the large outlier is removed, the estimated standard deviation shrinks, and the modest outlier now looks unusual.
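
A small numerical illustration of masking (a sketch assuming Python with numpy; the data are invented for the purpose):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.5, 3.0, 3.5, 4.0, 9.0, 50.0])  # one modest outlier (9) and one large outlier (50)

def z_scores(v):
    return (v - v.mean()) / v.std(ddof=1)

print(z_scores(x)[-2])       # the modest outlier has |z| near 0: masked by the inflated SD
print(z_scores(x[:-1])[-1])  # with the large outlier removed, its |z| exceeds 2
```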

This problem of masking gets worse as the complexity of the data increases. For example, in regression problems, diagnostic plots are used to identify outliers. However, it is common that once a few outliers have been removed, others become visible. The problem is even worse in higher dimensions.

Robust methods provide automatic ways of detecting, downweighting (or removing), and flagging outliers, largely removing the need for manual screening. Care must be taken; initial data showing the ozone hole first appearing over Antarctica were rejected as outliers by non-human screening.[6]

Variety of applications

Although this article deals with general principles for univariate statistical methods, robust methods also exist for regression problems, generalized linear models, and parameter estimation of various distributions.

Measures of robustness

The basic tools used to describe and measure robustness are the breakdown point, the influence function and the sensitivity curve.

Breakdown point

Intuitively, the breakdown point of an estimator is the proportion of incorrect observations (e.g. arbitrarily large observations) an estimator can handle before giving an incorrect (e.g., arbitrarily large) result. Usually, the asymptotic (infinite sample) limit is quoted as the breakdown point, although the finite-sample breakdown point may be more useful.[7] For example, given $n$ independent random variables $(X_1, \dots, X_n)$ and the corresponding realizations $x_1, \dots, x_n$, we can use $\overline{X_n} := (X_1 + \cdots + X_n)/n$ to estimate the mean. Such an estimator has a breakdown point of 0 (or finite-sample breakdown point of $1/n$) because we can make $\overline{x_n}$ arbitrarily large just by changing any one of $x_1, \dots, x_n$.

The higher the breakdown point of an estimator, the more robust it is. Intuitively, we can understand that a breakdown point cannot exceed 50%, because if more than half of the observations are contaminated, it is not possible to distinguish between the underlying distribution and the contaminating distribution (Rousseeuw & Leroy 1987). Therefore, the maximum breakdown point is 0.5 and there are estimators which achieve such a breakdown point. For example, the median has a breakdown point of 0.5. The X% trimmed mean has a breakdown point of X%, for the chosen level of X. Huber (1981) and Maronna et al. (2019) contain more details. The level and the power breakdown points of tests are investigated in He, Simpson & Portnoy (1990).
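
A sketch of this breakdown behaviour (assuming Python with numpy and scipy; the sample of 20 points and the contaminating value are arbitrary): the mean breaks with a single contaminated point, the 25% trimmed mean tolerates up to 5, and the median tolerates any number short of half.

```python
import numpy as np
from scipy import stats

x = np.arange(1.0, 21.0)  # 20 clean observations

def contaminate(v, k, value=1e6):
    """Replace the k largest observations with an arbitrarily large value."""
    out = v.copy()
    out[-k:] = value
    return out

for k in (1, 2, 5, 10):
    y = contaminate(x, k)
    print(k, np.mean(y), np.median(y), stats.trim_mean(y, 0.25))
```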

Statistics with high breakdown points are sometimes called resistant statistics.[8]

Example: speed-of-light data

In the speed-of-light example, removing the two lowest observations causes the mean to change from 26.2 to 27.75, a change of 1.55. The estimate of scale produced by the Qn method is 6.3. We can divide this by the square root of the sample size to get a robust standard error, and we find this quantity to be 0.78. Thus, the change in the mean resulting from removing two outliers is approximately twice the robust standard error.
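
The arithmetic behind this statement, using the sample size of 66 noted earlier, is roughly:

$$\frac{Q_n}{\sqrt{n}} = \frac{6.3}{\sqrt{66}} \approx 0.78, \qquad \frac{27.75 - 26.2}{0.78} = \frac{1.55}{0.78} \approx 2.$$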

The 10% trimmed mean for the speed-of-light data is 27.43. Removing the two lowest observations and recomputing gives 27.67. The trimmed mean is less affected by the outliers and has a higher breakdown point.

If we replace the lowest observation, −44, by −1000, the mean becomes 11.73, whereas the 10% trimmed mean is still 27.43. In many areas of applied statistics, it is common for data to be log-transformed to make them near symmetrical. Very small values become large negative numbers when log-transformed, and zeros become negative infinity. Therefore, this example is of practical interest.

Empirical influence function

The empirical influence function is a measure of the dependence of the estimator on the value of any one of the points in the sample. It is a model-free measure in the sense that it simply relies on calculating the estimator again with a different sample. On the right is Tukey's biweight function, which, as we will later see, is an example of what a "good" (in a sense defined later on) empirical influence function should look like.

In mathematical terms, an influence function is defined as a vector in the space of the estimator, which is in turn defined for a sample which is a subset of the population:

  1. $(\Omega, \mathcal{A}, P)$ is a probability space,
  2. $(\mathcal{X}, \Sigma)$ is a measurable space (state space),
  3. $\Theta$ is a parameter space of dimension $p \in \mathbb{N}^{*}$,
  4. $(\Gamma, S)$ is a measurable space.

For example,

  1. $(\Omega, \mathcal{A}, P)$ is any probability space,
  2. $(\mathcal{X}, \Sigma) = (\mathbb{R}, \mathcal{B})$,
  3. $\Theta = \mathbb{R} \times \mathbb{R}^{+}$.

The empirical influence function is defined as follows.

Let $n \in \mathbb{N}^{*}$ and let $X_1, \dots, X_n : (\Omega, \mathcal{A}) \to (\mathcal{X}, \Sigma)$ be i.i.d. random variables, with $(x_1, \dots, x_n)$ a sample from these variables. Let $T_n : (\mathcal{X}^{n}, \Sigma^{n}) \to (\Gamma, S)$ be an estimator, and let $i \in \{1, \dots, n\}$. The empirical influence function $EIF_i$ at observation $i$ is defined by:

$$EIF_i : x \in \mathcal{X} \mapsto n \cdot \left( T_n(x_1, \dots, x_{i-1}, x, x_{i+1}, \dots, x_n) - T_n(x_1, \dots, x_n) \right)$$

What this means is that we are replacing the $i$-th value in the sample by an arbitrary value $x$ and looking at the output of the estimator. Alternatively, the EIF is defined as the effect, scaled by $n+1$ instead of $n$, on the estimator of adding the point $x$ to the sample.[citation needed]
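
A direct computational sketch of this definition (assuming Python with numpy; the sample is synthetic):

```python
import numpy as np

def empirical_influence(estimator, x, i, grid):
    """EIF_i: n * (T_n with x_i replaced by each grid value - T_n on the original sample)."""
    n = len(x)
    base = estimator(x)
    values = []
    for g in grid:
        y = x.copy()
        y[i] = g
        values.append(n * (estimator(y) - base))
    return np.array(values)

rng = np.random.default_rng(0)
x = rng.normal(size=25)
grid = np.linspace(-10.0, 10.0, 201)
eif_mean = empirical_influence(np.mean, x, 0, grid)      # grows linearly and without bound in the grid value
eif_median = empirical_influence(np.median, x, 0, grid)  # bounded: flattens out for extreme values
```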

Influence function and sensitivity curve

Figure: Influence function when Tukey's biweight function (see the section on M-estimators below) is used as a loss function. Points with large deviation have no influence (y = 0).

Instead of relying solely on the data, we could use the distribution of the random variables. The approach is quite different from that of the previous paragraph. What we are now trying to do is to see what happens to an estimator when we change the distribution of the data slightly: it assumes a distribution, and measures sensitivity to change in this distribution. By contrast, the empirical influence assumes a sample set, and measures sensitivity to change in the samples.[9]

Let $A$ be a convex subset of the set of all finite signed measures on $\Sigma$. We want to estimate the parameter $\theta \in \Theta$ of a distribution $F$ in $A$. Let the functional $T : A \rightarrow \Gamma$ be the asymptotic value of some estimator sequence $(T_n)_{n \in \mathbb{N}}$. We will suppose that this functional is Fisher consistent, i.e. $\forall \theta \in \Theta, T(F_{\theta}) = \theta$. This means that at the model $F$, the estimator sequence asymptotically measures the correct quantity.

Let $G$ be some distribution in $A$. What happens when the data doesn't follow the model $F$ exactly but another, slightly different, "going towards" $G$?

We're looking at:

$$dT_{G-F}(F) = \lim_{t \to 0^{+}} \frac{T(tG + (1-t)F) - T(F)}{t},$$

which is the one-sided Gateaux derivative of $T$ at $F$, in the direction of $G - F$.

Let $x \in \mathcal{X}$. $\Delta_x$ is the probability measure which gives mass 1 to $\{x\}$. We choose $G = \Delta_x$. The influence function is then defined by:

$$IF(x; T; F) := \lim_{t \to 0^{+}} \frac{T(t\Delta_x + (1-t)F) - T(F)}{t}.$$

It describes the effect of an infinitesimal contamination at the point $x$ on the estimate we are seeking, standardized by the mass $t$ of the contamination (the asymptotic bias caused by contamination in the observations). For a robust estimator, we want a bounded influence function, that is, one which does not go to infinity as $x$ becomes arbitrarily large.
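
Two standard worked examples (general results, not specific to the data discussed in this article): for a distribution $F$ with mean $\mu$, density $f$ and median $m$,

$$IF(x; \bar{X}; F) = x - \mu, \qquad IF(x; \operatorname{med}; F) = \frac{\operatorname{sign}(x - m)}{2 f(m)},$$

so the influence function of the mean is unbounded, while that of the median is bounded.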

The empirical influence function uses the empirical distribution function $\hat{F}_n$ instead of the distribution function $F$, making use of the plug-in principle.

Desirable properties

Properties of an influence function that bestow it with desirable performance are:

  1. Finite rejection point $\rho^{*}$,
  2. Small gross-error sensitivity $\gamma^{*}$,
  3. Small local-shift sensitivity $\lambda^{*}$.

Rejection point
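
A standard way of writing the rejection point, in the notation above, is

$$\rho^{*} := \inf \{ r > 0 : IF(x; T; F) = 0 \text{ for all } |x| > r \},$$

with $\rho^{*} = \infty$ if no such $r$ exists; observations beyond the rejection point have no influence on the estimate at all.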

Gross-error sensitivity
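
The gross-error sensitivity is the worst-case (supremum) magnitude of the influence function over all contaminating points:

$$\gamma^{*}(T; F) := \sup_{x \in \mathcal{X}} \left| IF(x; T; F) \right|.$$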

Local-shift sensitivity

This value, which looks a lot like a Lipschitz constant, represents the effect of shifting an observation slightly from $x$ to a neighbouring point $y$, i.e., adding an observation at $y$ and removing one at $x$.
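
In symbols, it is commonly written as

$$\lambda^{*}(T; F) := \sup_{x \neq y} \left\| \frac{IF(y; T; F) - IF(x; T; F)}{y - x} \right\|.$$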

M-estimators

(The mathematical context of this paragraph is given in the section on empirical influence functions.)

Historically, several approaches to robust estimation were proposed, including R-estimators and L-estimators. However, M-estimators now appear to dominate the field as a result of their generality, their potential for high breakdown points and comparatively high efficiency. See Huber (1981).

M-estimators are not inherently robust. However, they can be designed to achieve favourable properties, including robustness. M-estimators are a generalization of maximum likelihood estimators (MLEs), which are determined by maximizing $\prod_{i=1}^{n} f(x_i)$ or, equivalently, minimizing $\sum_{i=1}^{n} -\log f(x_i)$. In 1964, Huber proposed to generalize this to the minimization of $\sum_{i=1}^{n} \rho(x_i)$, where $\rho$ is some function. MLEs are therefore a special case of M-estimators (hence the name: "Maximum likelihood type" estimators).

Minimizing $\sum_{i=1}^{n} \rho(x_i)$ can often be done by differentiating $\rho$ and solving $\sum_{i=1}^{n} \psi(x_i) = 0$, where $\psi(x) = \dfrac{d\rho(x)}{dx}$ (if $\rho$ has a derivative).

Several choices of $\rho$ and $\psi$ have been proposed. The two figures below show four $\rho$ functions and their corresponding $\psi$ functions.

For squared errors, $\rho(x)$ increases at an accelerating rate, whilst for absolute errors, it increases at a constant rate. When Winsorizing is used, a mixture of these two effects is introduced: for small values of $x$, $\rho$ increases at the squared rate, but once the chosen threshold is reached (1.5 in this example), the rate of increase becomes constant. This Winsorised estimator is also known as the Huber loss function.

Tukey's biweight (also known as bisquare) function behaves in a similar way to the squared error function at first, but for larger errors, the function tapers off.
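
A minimal sketch of these ρ and ψ functions (assuming Python with numpy; the Huber threshold of 1.5 matches the example above, while c = 4.685 for the biweight is a common default giving 95% efficiency at the normal and is not a value taken from this article):

```python
import numpy as np

def rho_huber(x, k=1.5):
    """Huber loss: quadratic for |x| <= k, then growing linearly with slope k."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= k, 0.5 * x**2, k * (np.abs(x) - 0.5 * k))

def psi_huber(x, k=1.5):
    """Derivative of the Huber loss: the identity, clipped at +/- k."""
    return np.clip(x, -k, k)

def psi_tukey_biweight(x, c=4.685):
    """Tukey's biweight (bisquare) psi: redescends to exactly 0 for |x| > c."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= c, x * (1.0 - (x / c) ** 2) ** 2, 0.0)
```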

Properties of M-estimators

M-estimators do not necessarily relate to a probability density function. Therefore, off-the-shelf approaches to inference that arise from likelihood theory can not, in general, be used.

It can be shown that M-estimators are asymptotically normally distributed so that as long as their standard errors can be computed, an approximate approach to inference is available.

Since M-estimators are normal only asymptotically, for small sample sizes it might be appropriate to use an alternative approach to inference, such as the bootstrap. However, M-estimates are not necessarily unique (i.e., there might be more than one solution that satisfies the equations). Also, it is possible that any particular bootstrap sample can contain more outliers than the estimator's breakdown point. Therefore, some care is needed when designing bootstrap schemes.

Of course, as we saw with the speed-of-light example, the mean is only normally distributed asymptotically and when outliers are present the approximation can be very poor even for quite large samples. However, classical statistical tests, including those based on the mean, are typically bounded above by the nominal size of the test. The same is not true of M-estimators and the type I error rate can be substantially above the nominal level.

These considerations do not "invalidate" M-estimation in any way. They merely make clear that some care is needed in their use, as is true of any other method of estimation.

Influence function of an M-estimator

It can be shown that the influence function of an M-estimator $T$ is proportional to $\psi$,[10] which means we can derive the properties of such an estimator (such as its rejection point, gross-error sensitivity or local-shift sensitivity) when we know its $\psi$ function. More precisely,

$$IF(x; T; F) = M^{-1} \psi(x),$$

with the $p \times p$ matrix $M$ given by:

$$M = -\int_{\mathcal{X}} \frac{\partial \psi(x, \theta)}{\partial \theta} \, dF(x).$$

Choice of ψ and ρ

In many practical situations, the choice of the $\psi$ function is not critical to gaining a good robust estimate, and many choices will give similar results that offer great improvements, in terms of efficiency and bias, over classical estimates in the presence of outliers.[11]

Theoretically, $\psi$ functions are to be preferred,[clarification needed] and Tukey's biweight (also known as bisquare) function is a popular choice.[12] Maronna et al. (2019) recommend the biweight function with efficiency at the normal set to 85%.

Robust parametric approaches

M-estimators do not necessarily relate to a density function and so are not fully parametric. Fully parametric approaches to robust modeling and inference, both Bayesian and likelihood approaches, usually deal with heavy-tailed distributions such as Student's t-distribution.

For the t-distribution with $\nu$ degrees of freedom, it can be shown that

$$\psi(x) = \frac{x}{x^2 + \nu}.$$

For $\nu = 1$, the t-distribution is equivalent to the Cauchy distribution. The degrees of freedom $\nu$ is sometimes known as the kurtosis parameter. It is the parameter that controls how heavy the tails are. In principle, $\nu$ can be estimated from the data in the same way as any other parameter. In practice, it is common for there to be multiple local maxima when $\nu$ is allowed to vary. As such, it is common to fix $\nu$ at a value around 4 or 6. The figure below displays the $\psi$-function for 4 different values of $\nu$.
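
As an illustration of how such a ψ can be turned into an estimate, the following sketch (assuming Python with numpy and scipy; fixing ν = 4 and holding the scale at the normal-consistent MAD are choices made for the example, not prescriptions from the text) computes a location M-estimate by iterative reweighting:

```python
import numpy as np
from scipy import stats

def t_location(x, nu=4.0, tol=1e-8, max_iter=100):
    """Location M-estimate with psi(r) = r / (r^2 + nu); scale held fixed at the MAD."""
    s = stats.median_abs_deviation(x, scale="normal")
    mu = np.median(x)  # robust starting value
    for _ in range(max_iter):
        r = (x - mu) / s
        w = 1.0 / (r**2 + nu)  # solving sum psi(r_i) = 0 amounts to a weighted mean with these weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```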

Example: speed-of-light data

For the speed-of-light data, allowing the kurtosis parameter $\nu$ to vary and maximizing the likelihood, we get

Fixing $\nu$ and maximizing the likelihood gives

Related concepts

A pivotal quantity is a function of data, whose underlying population distribution is a member of a parametric family, that is not dependent on the values of the parameters. An ancillary statistic is such a function that is also a statistic, meaning that it is computed in terms of the data alone. Such functions are robust to parameters in the sense that they are independent of the values of the parameters, but not robust to the model in the sense that they assume an underlying model (parametric family), and in fact, such functions are often very sensitive to violations of the model assumptions. Thus test statistics, frequently constructed in terms of these so as not to be sensitive to assumptions about parameters, are still very sensitive to model assumptions.

Replacing outliers and missing values

Replacing missing data is called imputation. If there are relatively few missing points, there are some models which can be used to estimate values to complete the series, such as replacing missing values with the mean or median of the data. Simple linear regression can also be used to estimate missing values.[13] In addition, outliers can sometimes be accommodated in the data through the use of trimmed means, other scale estimators apart from standard deviation (e.g., MAD) and Winsorization.[14] In calculations of a trimmed mean, a fixed percentage of data is dropped from each end of the ordered data, thus eliminating the outliers. The mean is then calculated using the remaining data. Winsorizing involves accommodating an outlier by replacing it with the next highest or next smallest value as appropriate.[15]
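
A brief sketch of trimming and Winsorizing as just described (assuming Python with numpy and scipy; the six-point data set is invented for the illustration):

```python
import numpy as np
from scipy import stats

x = np.array([2.0, 3.0, 5.0, 6.0, 9.0, 120.0])

# 20% trimmed mean: drop one value from each end of the ordered data, then average.
print(stats.trim_mean(x, 0.2))  # 5.75

# Winsorize by hand: replace the largest value by the next largest and the
# smallest by the next smallest, then compute the mean as usual.
s = np.sort(x)
s[-1], s[0] = s[-2], s[1]
print(s.mean())  # about 5.83
```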

However, using these types of models to predict missing values or outliers in a long time series is difficult and often unreliable, particularly if the number of values to be in-filled is relatively high in comparison with total record length. The accuracy of the estimate depends on how good and representative the model is and how long the period of missing values extends.[16] When dynamic evolution is assumed in a series, the missing data point problem becomes an exercise in multivariate analysis (rather than the univariate approach of most traditional methods of estimating missing values and outliers). In such cases, a multivariate model will be more representative than a univariate one for predicting missing values. The Kohonen self organising map (KSOM) offers a simple and robust multivariate model for data analysis, thus providing good possibilities to estimate missing values, taking into account their relationship or correlation with other pertinent variables in the data record.[15]

Standard Kalman filters are not robust to outliers. To this end Ting, Theodorou & Schaal (2007) have recently shown that a modification of Masreliez's theorem can deal with outliers.

One common approach to handle outliers in data analysis is to perform outlier detection first, followed by an efficient estimation method (e.g., least squares). While this approach is often useful, one must keep in mind two challenges. First, an outlier detection method that relies on a non-robust initial fit can suffer from the effect of masking, that is, a group of outliers can mask each other and escape detection.[17] Second, if a high breakdown initial fit is used for outlier detection, the follow-up analysis might inherit some of the inefficiencies of the initial estimator.[18]

Use in machine learning

Although influence functions have a long history in statistics, they were not widely used in machine learning due to several challenges. One of the primary obstacles is that traditional influence functions rely on expensive second-order derivative computations and assume model differentiability and convexity. These assumptions are limiting, especially in modern machine learning, where models are often non-differentiable, non-convex, and operate in high-dimensional spaces.

Koh & Liang (2017) addressed these challenges by introducing methods to efficiently approximate influence functions using second-order optimization techniques, such as those developed by Pearlmutter (1994), Martens (2010), and Agarwal, Bullins & Hazan (2017). Their approach remains effective even when the assumptions of differentiability and convexity degrade, enabling influence functions to be used in the context of non-convex deep learning models. They demonstrated that influence functions are a powerful and versatile tool that can be applied to a variety of tasks in machine learning, including:

  • Understanding Model Behavior: Influence functions help identify which training points are most “responsible” for a given prediction, offering insights into how models generalize from training data.
  • Debugging Models: Influence functions can assist in identifying domain mismatches—when the training data distribution does not match the test data distribution—which can cause models with high training accuracy to perform poorly on test data, as shown by Ben-David et al. (2010). By revealing which training examples contribute most to errors, developers can address these mismatches.
  • Dataset Error Detection: Noisy or corrupted labels are common in real-world data, especially when crowdsourced or adversarially attacked. Influence functions allow human experts to prioritize reviewing only the most impactful examples in the training set, facilitating efficient error detection and correction.
  • Adversarial Attacks: Models that rely heavily on a small number of influential training points are vulnerable to adversarial perturbations. These perturbed inputs can significantly alter predictions and pose security risks in machine learning systems where attackers have access to the training data (See adversarial machine learning).

Koh and Liang’s contributions have opened the door for influence functions to be used in various applications across machine learning, from interpretability to security, marking a significant advance in their applicability.

See also

Notes

  1. ^ Sarkar, Palash (2014). "On some connections between statistics and cryptology". Journal of Statistical Planning and Inference. 148: 20–37. doi:10.1016/j.jspi.2013.05.008. ISSN 0378-3758.
  2. ^ Huber, Peter J.; Ronchetti, Elvezio M. (2009). Robust Statistics. Wiley Series in Probability and Statistics (2nd ed.). Wiley. doi:10.1002/9780470434697. ISBN 978-0-470-12990-6.
  3. ^ Huber, Peter J.; Ronchetti, Elvezio M. (2009). Robust Statistics. Wiley Series in Probability and Statistics (2nd ed.). Wiley. doi:10.1002/9780470434697. ISBN 978-0-470-12990-6.
  4. ^ a b c Huber (1981), page 1.
  5. ^ Rousseeuw & Croux (1993).
  6. ^ Masters, Jeffrey. "When was the ozone hole discovered". Weather Underground. Archived from the original on 2025-08-07.
  7. ^ Maronna et al. (2019)
  8. ^ Resistant statistics, David B. Stephenson
  9. ^ von Mises (1947).
  10. ^ Huber (1981), page 45
  11. ^ Huber (1981).
  12. ^ Maronna et al. (2019)
  13. ^ MacDonald & Zucchini (1997); Harvey & Fernandes (1989).
  14. ^ McBean & Rovers (1998).
  15. ^ a b Rustum & Adeloye (2007).
  16. ^ Rosen & Lennox (2001).
  17. ^ Rousseeuw & Leroy (1987).
  18. ^ He & Portnoy (1992).

References

百度