
Risk Module

Classes:

Name Description
BaseRisk

Abstract base class for computing risks.

MSERisk

A class used to compute risks based on Mean Squared Error (MSE).

PrecisionRisk

A class used to compute risks based on the precision of predictions.

RecallRisk

A class used to compute risks based on the recall of predictions.

AccuracyRisk

A class used to compute risks based on the accuracy of predictions.

CoverageRisk

A class used to compute risks based on the coverage of prediction sets.

FalseDiscoveryRisk

A class used to compute risks based on the false discovery rate (FDR) of prediction sets.

AbstentionRisk

A class used to compute risks based on the ratio of human to machine predictions.

NonUniqueCandidateRisk

A class used to compute risks based on the presence of alternative candidate predictions.

BaseRisk

BaseRisk(acceptable_risk)

Bases: ABC

Abstract base class for computing risks.

This class provides methods for computing risks based on predictions made by an estimator or directly from predictions and true values.

Parameters:

Name Type Description Default
acceptable_risk float

The acceptable risk value.

required

Attributes:

Name Type Description
name str

The name of the risk function.

greater_is_better bool

Whether a higher risk value is better.

acceptable_risk float

The acceptable risk value.

Methods:

Name Description
convert_to_performance

Convert risk to performance measure.

compute

Compute the risks based on predictions and true values.

Source code in risk_control/risk.py
def __init__(self, acceptable_risk: float):
    self.acceptable_risk = acceptable_risk

name instance-attribute

name

greater_is_better instance-attribute

greater_is_better

acceptable_risk instance-attribute

acceptable_risk = acceptable_risk

convert_to_performance

convert_to_performance(x)

Convert risk to performance measure. If the object is a risk, the performance measure is the risk.

Parameters:

Name Type Description Default
x float

The risk value.

required

Returns:

Type Description
float

The performance measure.

Source code in risk_control/risk.py
def convert_to_performance(self, x: float) -> float:
    """
    Convert risk to performance measure.
    If the object is a risk, the performance measure is the risk.

    Parameters
    ----------
    x : float
        The risk value.

    Returns
    -------
    float
        The performance measure.
    """
    return x

_compute_from_estimator

_compute_from_estimator(estimator, X, y_true, **kwargs)

Compute the risk based on predictions made by an estimator.

Parameters:

Name Type Description Default
estimator BaseEstimator

The estimator used to make predictions. Must implement the predict method.

required
X ndarray

The input samples.

required
y_true ndarray

The true values.

required
**kwargs dict

Additional keyword arguments (used in compute).

{}

Returns:

Type Description
float

The computed risk.

Source code in risk_control/risk.py
def _compute_from_estimator(
    self,
    estimator: BaseEstimator,
    X: np.ndarray,
    y_true: np.ndarray,
    **kwargs: Any,
) -> float:
    """
    Compute the risk based on predictions made by an estimator.

    Parameters
    ----------
    estimator : BaseEstimator
        The estimator used to make predictions.
        Need to implement `predict` method.
    X : np.ndarray
        The input samples.
    y_true : np.ndarray
        The true values.
    **kwargs : dict
        Additional keyword arguments (used in [`compute`][risk.BaseRisk.compute]).

    Returns
    -------
    float
        The computed risk.
    """
    y_pred = estimator.predict(X)
    return self._compute_from_predictions(y_pred, y_true, **kwargs)

_compute_from_predictions

_compute_from_predictions(y_pred, y_true, **kwargs)

Compute the risk based on predictions and true values.

Parameters:

Name Type Description Default
y_pred ndarray

The predicted values.

required
y_true ndarray

The true values.

required
**kwargs dict

Additional keyword arguments (used in compute).

{}

Returns:

Type Description
float

The computed risk.

Source code in risk_control/risk.py
def _compute_from_predictions(
    self,
    y_pred: np.ndarray,
    y_true: np.ndarray,
    **kwargs: Any,
) -> float:
    """
    Compute the risk based on predictions and true values.

    Parameters
    ----------
    y_pred : np.ndarray
        The predicted values.
    y_true : np.ndarray
        The true values.
    **kwargs : dict
        Additional keyword arguments (used in [`compute`][risk.BaseRisk.compute]).

    Returns
    -------
    float
        The computed risk.
    """
    return self._compute_mean(y_pred, y_true, **kwargs)

_compute_mean

_compute_mean(y_pred, y_true, **kwargs)

Compute the mean of the computed risks (ignoring NaNs).

Parameters:

Name Type Description Default
y_pred ndarray

The predicted values.

required
y_true ndarray

The true values.

required
**kwargs dict

Additional keyword arguments (used in compute).

{}

Returns:

Type Description
float

The mean of the computed risks.

Source code in risk_control/risk.py
def _compute_mean(
    self,
    y_pred: np.ndarray,
    y_true: np.ndarray,
    **kwargs: Any,
) -> float:
    """
    Compute the mean of the computed risks (ignoring NaNs).

    Parameters
    ----------
    y_pred : np.ndarray
        The predicted values.
    y_true : np.ndarray
        The true values.
    **kwargs : dict
        Additional keyword arguments (used in [`compute`][risk.BaseRisk.compute]).

    Returns
    -------
    float
        The mean of the computed risks.
    """
    mean = np.nanmean(self.compute(y_pred, y_true, **kwargs))
    if mean.ndim == 0 and np.isnan(mean):
        mean = (-1 if self.greater_is_better else 1) * np.inf
    return mean
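
The NaN handling above can be illustrated on its own. The following minimal sketch (array values are made up) shows how per-sample risks marked with NaN are excluded from the average, and how an all-NaN array falls back to an infinite risk so that such a configuration is never preferred.

import numpy as np

# Hypothetical per-sample risks; NaN marks samples excluded from the average.
per_sample_risks = np.array([0.0, 1.0, np.nan, 0.0])
print(np.nanmean(per_sample_risks))  # 0.333..., the NaN entry is ignored

# All-NaN case: np.nanmean returns NaN, which _compute_mean replaces by +/- inf.
greater_is_better = False  # e.g. an MSE-style risk
mean = np.nanmean(np.array([np.nan, np.nan]))
if np.isnan(mean):
    mean = (-1 if greater_is_better else 1) * np.inf
print(mean)  # inf, i.e. the worst possible risk when nothing can be evaluated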

compute abstractmethod

compute(y_pred, y_true, **kwargs)

Compute the risks based on predictions and true values.

This method should be implemented in a subclass.

Parameters:

Name Type Description Default
y_pred ndarray

The predicted values.

required
y_true ndarray

The true values.

required
**kwargs dict

Additional keyword arguments.

{}

Returns:

Type Description
ndarray

The computed risks.

Raises:

Type Description
NotImplementedError

If this method is not implemented in a subclass.

Source code in risk_control/risk.py
@abstractmethod
def compute(
    self,
    y_pred: np.ndarray,
    y_true: np.ndarray,
    **kwargs: Any,
) -> np.ndarray:
    """
    Compute the risks based on predictions and true values.

    This method should be implemented in a subclass.

    Parameters
    ----------
    y_pred : np.ndarray
        The predicted values.
    y_true : np.ndarray
        The true values.
    **kwargs : dict
        Additional keyword arguments.

    Returns
    -------
    np.ndarray
        The computed risks.

    Raises
    ------
    NotImplementedError
        If this method is not implemented in a subclass.
    """
    pass
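
As a rough usage sketch (not part of the library), a new risk can be defined by subclassing BaseRisk, setting name and greater_is_better, and implementing compute so that it returns one risk value per sample. The MAERisk class and its mae_max parameter below are hypothetical, and the import path is assumed from the source location shown above.

from typing import Any

import numpy as np

from risk_control.risk import BaseRisk  # assumed import path


class MAERisk(BaseRisk):
    """Hypothetical mean-absolute-error style risk, bounded to [0, 1]."""

    name = "mae"
    greater_is_better = False

    def __init__(self, acceptable_risk: float, *, mae_max: float = 1.0) -> None:
        super().__init__(acceptable_risk)
        self.mae_max = mae_max

    def compute(self, y_pred: np.ndarray, y_true: np.ndarray, **kwargs: Any) -> np.ndarray:
        # One risk value per sample, scaled and clipped like MSERisk.
        return np.clip(np.abs(y_pred - y_true) / self.mae_max, 0, 1)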

MSERisk

MSERisk(acceptable_risk, *, mse_max=1.0)

Bases: BaseRisk

A class used to compute risks based on Mean Squared Error (MSE).

Parameters:

Name Type Description Default
mse_max float

The maximum value for Mean Squared Error (MSE).

1.0

Attributes:

Name Type Description
mse_max float

The maximum value for Mean Squared Error (MSE).

Methods:

Name Description
convert_to_performance

Convert risk to performance measure.

compute

Computes the risks based on the predicted and true values.

Source code in risk_control/risk.py
def __init__(self, acceptable_risk: float, *, mse_max: float = 1.0) -> None:
    super().__init__(acceptable_risk)
    self.mse_max = mse_max
    self.acceptable_risk = self.acceptable_risk / self.mse_max

name class-attribute instance-attribute

name = 'mse'

greater_is_better class-attribute instance-attribute

greater_is_better = False

mse_max instance-attribute

mse_max = mse_max

acceptable_risk instance-attribute

acceptable_risk = acceptable_risk / mse_max

convert_to_performance

convert_to_performance(x)

Convert risk to performance measure. If the object is a risk, the performance measure is the risk.

Parameters:

Name Type Description Default
x float

The risk value.

required

Returns:

Type Description
float

The performance measure.

Source code in risk_control/risk.py
def convert_to_performance(self, x: float) -> float:
    """
    Convert risk to performance measure.
    If the object is a risk, the performance measure is the risk.

    Parameters
    ----------
    x : float
        The risk value.

    Returns
    -------
    float
        The performance measure.
    """
    return x * self.mse_max

compute

compute(y_pred, y_true, **kwargs)

Computes the risks based on the predicted and true values.

Parameters:

Name Type Description Default
y_pred ndarray

The predicted values.

required
y_true ndarray

The true values.

required
**kwargs dict

Additional keyword arguments.

{}

Returns:

Type Description
ndarray

The computed risks.

Source code in risk_control/risk.py
def compute(
    self,
    y_pred: np.ndarray,
    y_true: np.ndarray,
    **kwargs: Any,
) -> np.ndarray:
    """
    Computes the risks based on the predicted and true values.

    Parameters
    ----------
    y_pred : np.ndarray
        The predicted values.
    y_true : np.ndarray
        The true values.
    **kwargs : dict
        Additional keyword arguments.

    Returns
    -------
    np.ndarray
        The computed risks.
    """
    return np.clip((y_pred - y_true) ** 2 / self.mse_max, 0, 1)
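
A minimal usage sketch (array values made up; import path assumed from the source location above):

import numpy as np

from risk_control.risk import MSERisk  # assumed import path

risk = MSERisk(acceptable_risk=0.1, mse_max=4.0)

y_true = np.array([0.0, 1.0, 2.0])
y_pred = np.array([0.0, 2.0, 5.0])

# Per-sample squared errors scaled by mse_max and clipped to [0, 1]:
# [0/4, 1/4, 9/4 clipped to 1] -> [0.0, 0.25, 1.0]
print(risk.compute(y_pred, y_true))

# The acceptable risk is rescaled by mse_max at construction time.
print(risk.acceptable_risk)  # 0.1 / 4.0 = 0.025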

PrecisionRisk

PrecisionRisk(acceptable_risk)

Bases: BaseRisk

A class used to compute risks based on the precision of predictions.

Methods:

Name Description
convert_to_performance

Convert risk to performance measure.

compute

Compute risks based on the precision of predictions.

Attributes:

Name Type Description
name str
greater_is_better bool
Source code in risk_control/risk.py
def __init__(self, acceptable_risk: float):
    self.acceptable_risk = acceptable_risk

name class-attribute instance-attribute

name = 'precision'

greater_is_better class-attribute instance-attribute

greater_is_better = True

convert_to_performance

convert_to_performance(x)

Convert risk to performance measure. If the object is a risk, the performance measure is the risk.

Parameters:

Name Type Description Default
x float

The risk value.

required

Returns:

Type Description
float

The performance measure.

Source code in risk_control/risk.py
def convert_to_performance(self, x: float) -> float:
    """
    Convert risk to performance measure.
    If the object is a risk, the performance measure is the risk.

    Parameters
    ----------
    x : float
        The risk value.

    Returns
    -------
    float
        The performance measure.
    """
    return 1 - x

compute

compute(y_pred, y_true, **kwargs)

Compute risks based on the precision of predictions.

Parameters:

Name Type Description Default
y_pred ndarray

The predicted labels.

required
y_true ndarray

The true labels.

required
**kwargs dict

Additional keyword arguments.

{}

Returns:

Type Description
ndarray

The computed risks.

Source code in risk_control/risk.py
def compute(
    self,
    y_pred: np.ndarray,
    y_true: np.ndarray,
    **kwargs: Any,
) -> np.ndarray:
    """
    Compute risks based on the precision of predictions.

    Parameters
    ----------
    y_pred : np.ndarray
        The predicted labels.
    y_true : np.ndarray
        The true labels.
    **kwargs : dict
        Additional keyword arguments.

    Returns
    -------
    np.ndarray
        The computed risks.
    """
    risks = 1.0 - (y_pred == y_true)
    risks[~np.bool(y_pred)] = np.nan
    return risks
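
The masking logic can be reproduced outside the class. In this minimal standalone sketch (made-up binary labels), a sample contributes a risk of 1 when the prediction is wrong and 0 when it is right, and samples that were not predicted positive are set to NaN so that only predicted positives enter the precision-style average.

import numpy as np

y_pred = np.array([1, 1, 0, 0], dtype=float)  # 1 = predicted positive
y_true = np.array([1, 0, 1, 0], dtype=float)

risks = 1.0 - (y_pred == y_true)  # 1 where the prediction is wrong
risks[y_pred == 0] = np.nan       # ignore samples not predicted positive
print(risks)                      # [0., 1., nan, nan]
print(1 - np.nanmean(risks))      # precision over predicted positives: 0.5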

RecallRisk

RecallRisk(acceptable_risk)

Bases: BaseRisk

A class used to compute risks based on the recall of predictions.

Methods:

Name Description
convert_to_performance

Convert risk to performance measure.

compute

Compute risks based on the recall of predictions.

Attributes:

Name Type Description
name str
greater_is_better bool
Source code in risk_control/risk.py
def __init__(self, acceptable_risk: float):
    self.acceptable_risk = acceptable_risk

name class-attribute instance-attribute

name = 'recall'

greater_is_better class-attribute instance-attribute

greater_is_better = True

convert_to_performance

convert_to_performance(x)

Convert risk to performance measure. If the object is a risk, the performance measure is the risk.

Parameters:

Name Type Description Default
x float

The risk value.

required

Returns:

Type Description
float

The performance measure.

Source code in risk_control/risk.py
def convert_to_performance(self, x: float) -> float:
    """
    Convert risk to performance measure.
    If the object is a risk, the performance measure is the risk.

    Parameters
    ----------
    x : float
        The risk value.

    Returns
    -------
    float
        The performance measure.
    """
    return 1 - x

compute

compute(y_pred, y_true, **kwargs)

Compute risks based on the recall of predictions.

Parameters:

Name Type Description Default
y_pred ndarray

The predicted labels.

required
y_true ndarray

The true labels.

required
**kwargs dict

Additional keyword arguments.

{}

Returns:

Type Description
ndarray

The computed risks.

Source code in risk_control/risk.py
def compute(
    self,
    y_pred: np.ndarray,
    y_true: np.ndarray,
    **kwargs: Any,
) -> np.ndarray:
    """
    Compute risks based on the recall of predictions.

    Parameters
    ----------
    y_pred : np.ndarray
        The predicted labels.
    y_true : np.ndarray
        The true labels.
    **kwargs : dict
        Additional keyword arguments.

    Returns
    -------
    np.ndarray
        The computed risks.
    """
    risks = 1.0 - (y_pred == y_true)
    risks[~np.bool(y_true)] = np.nan
    return risks
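
The recall variant masks on the true labels instead of the predictions; a standalone sketch with the same made-up arrays:

import numpy as np

y_pred = np.array([1, 1, 0, 0], dtype=float)
y_true = np.array([1, 0, 1, 0], dtype=float)  # 1 = actual positive

risks = 1.0 - (y_pred == y_true)  # 1 where the prediction is wrong
risks[y_true == 0] = np.nan       # ignore samples that are not actual positives
print(risks)                      # [0., nan, 1., nan]
print(1 - np.nanmean(risks))      # recall over actual positives: 0.5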

AccuracyRisk

AccuracyRisk(acceptable_risk)

Bases: BaseRisk

A class used to compute risks based on the accuracy of predictions.

Unlike CoverageRisk, this class uses the best class prediction to compute the risk: it tests whether the best class prediction equals the true label.

This class is not relevant when the decision is a prediction set, because the best class prediction is not defined.

At this time, no decision class uses scoring decisions, so this class is not used.

  • Could be relevant for [SelectiveClassification][decision.SelectiveClassification].
  • Irrelevant for [MultiSelectiveClassification][decision.MultiSelectiveClassification].

Methods:

Name Description
convert_to_performance

Convert risk to performance measure.

compute

Compute risks based on the accuracy of predictions.

Attributes:

Name Type Description
name str
greater_is_better bool
Source code in risk_control/risk.py
def __init__(self, acceptable_risk: float):
    self.acceptable_risk = acceptable_risk

name class-attribute instance-attribute

name = 'accuracy'

greater_is_better class-attribute instance-attribute

greater_is_better = True

convert_to_performance

convert_to_performance(x)

Convert risk to performance measure. If the object is a risk, the performance measure is the risk.

Parameters:

Name Type Description Default
x float

The risk value.

required

Returns:

Type Description
float

The performance measure.

Source code in risk_control/risk.py
def convert_to_performance(self, x: float) -> float:
    """
    Convert risk to performance measure.
    If the object is a risk, the performance measure is the risk.

    Parameters
    ----------
    x : float
        The risk value.

    Returns
    -------
    float
        The performance measure.
    """
    return 1 - x

compute

compute(y_pred, y_true, **kwargs)

Compute risks based on the accuracy of predictions.

Parameters:

Name Type Description Default
y_pred ndarray

The predicted labels.

required
y_true ndarray

The true labels.

required
**kwargs dict

Additional keyword arguments.

{}

Returns:

Type Description
ndarray

The computed risks.

Source code in risk_control/risk.py
def compute(
    self,
    y_pred: np.ndarray,
    y_true: np.ndarray,
    **kwargs: Any,
) -> np.ndarray:
    """
    Compute risks based on the accuracy of predictions.

    Parameters
    ----------
    y_pred : np.ndarray
        The predicted labels.
    y_true : np.ndarray
        The true labels.
    **kwargs : dict
        Additional keyword arguments.

    Returns
    -------
    np.ndarray
        The computed risks.
    """
    if y_pred.ndim == 1:
        risks = 1.0 - (y_pred == y_true)
        risks[np.isnan(y_pred)] = np.nan
        return risks

    indexes_abs = np.any(np.isnan(y_pred), axis=-1)
    indexes_false = np.all(~np.bool(y_pred), axis=-1)
    risks = 1.0 - (np.nanargmax(y_pred, axis=-1) == y_true)
    # risks = np.where(
    #     np.all(np.isnan(y_pred), axis=-1),
    #     np.empty_like(y_true) * (_abs),
    #     1.0 - (np.nanargmax(y_pred, axis=-1) == y_true),
    # )
    risks[indexes_abs] = np.nan  # noqa: E712
    risks[indexes_false] = 1.0
    return risks
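
For 2-D scored predictions the intent is: compare the best class (argmax per row) with the true label, treat rows containing NaN as abstentions (risk NaN), and count all-zero rows, where no class is selected, as errors. A minimal standalone sketch of that logic with made-up scores (it mirrors the behaviour rather than calling the class):

import numpy as np

# Hypothetical class scores: one row per sample, one column per class.
y_pred = np.array([
    [0.1, 0.9, 0.0],           # best class 1
    [0.8, 0.1, 0.1],           # best class 0
    [np.nan, np.nan, np.nan],  # abstention
    [0.0, 0.0, 0.0],           # no class selected
])
y_true = np.array([1, 2, 0, 1])

abstained = np.any(np.isnan(y_pred), axis=-1)
best = np.argmax(np.nan_to_num(y_pred, nan=0.0), axis=-1)

risks = 1.0 - (best == y_true)
risks[np.all(y_pred == 0.0, axis=-1)] = 1.0  # empty selection counts as an error
risks[abstained] = np.nan                    # abstentions are excluded from the mean
print(risks)  # [0., 1., nan, 1.]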

CoverageRisk

CoverageRisk(acceptable_risk)

Bases: BaseRisk

A class used to compute risks based on the coverage of prediction sets.

Unlike AccuracyRisk, this class uses the prediction sets to compute the risks: it tests whether the true label is in the prediction set.

Relevant for [MultiSelectiveClassification][decision.MultiSelectiveClassification]. Compatible with [SelectiveClassification][decision.SelectiveClassification].

Methods:

Name Description
convert_to_performance

Convert risk to performance measure.

compute

Compute risks based on the coverage of prediction sets.

Attributes:

Name Type Description
name str
greater_is_better bool
Source code in risk_control/risk.py
def __init__(self, acceptable_risk: float):
    self.acceptable_risk = acceptable_risk

name class-attribute instance-attribute

name = 'coverage'

greater_is_better class-attribute instance-attribute

greater_is_better = True

convert_to_performance

convert_to_performance(x)

Convert risk to performance measure. If the object is a risk, the performance measure is the risk.

Parameters:

Name Type Description Default
x float

The risk value.

required

Returns:

Type Description
float

The performance measure.

Source code in risk_control/risk.py
def convert_to_performance(self, x: float) -> float:
    """
    Convert risk to performance measure.
    If the object is a risk, the performance measure is the risk.

    Parameters
    ----------
    x : float
        The risk value.

    Returns
    -------
    float
        The performance measure.
    """
    return 1 - x

compute

compute(y_pred, y_true, **kwargs)

Compute risks based on the coverage of prediction sets.

Parameters:

Name Type Description Default
y_pred ndarray

The predicted labels.

required
y_true ndarray

The true labels.

required
**kwargs dict

Additional keyword arguments.

{}

Returns:

Type Description
ndarray

The computed risks.

Source code in risk_control/risk.py
def compute(
    self,
    y_pred: np.ndarray,
    y_true: np.ndarray,
    **kwargs: Any,
) -> np.ndarray:
    """
    Compute risks based on the coverage of prediction sets.

    Parameters
    ----------
    y_pred : np.ndarray
        The predicted labels.
    y_true : np.ndarray
        The true labels.
    **kwargs : dict
        Additional keyword arguments.

    Returns
    -------
    np.ndarray
        The computed risks.
    """
    if y_pred.ndim == 1:
        risks = 1.0 - (y_pred == y_true)
        risks[np.isnan(y_pred)] = np.nan
        return risks

    n_samples, _ = y_pred.shape
    indexes_abs = np.any(np.isnan(y_pred), axis=-1)
    risks = 1.0 - (y_pred[np.arange(n_samples), y_true])
    risks[indexes_abs] = np.nan  # noqa: E712
    return risks
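
A minimal usage sketch with prediction sets encoded as indicator rows (array values made up; import path assumed from the source location above). A row of NaN marks an abstention.

import numpy as np

from risk_control.risk import CoverageRisk  # assumed import path

risk = CoverageRisk(acceptable_risk=0.1)

# One indicator row per sample: 1.0 means the label is in the prediction set.
y_pred = np.array([
    [1.0, 1.0, 0.0],           # set {0, 1}
    [0.0, 1.0, 0.0],           # set {1}
    [np.nan, np.nan, np.nan],  # abstention
])
y_true = np.array([0, 2, 1])

# 0 when the true label is covered, 1 otherwise, NaN for abstentions.
print(risk.compute(y_pred, y_true))  # [0., 1., nan]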

FalseDiscoveryRisk

FalseDiscoveryRisk(acceptable_risk)

Bases: BaseRisk

A class used to compute risks based on the false discovery rate (FDR) of prediction sets.

  • TODO: Relevant for [MultiSelectiveClassification][decision.MultiSelectiveClassification].
  • TODO: Compatible with [SelectiveClassification][decision.SelectiveClassification].

Methods:

Name Description
convert_to_performance

Convert risk to performance measure.

compute

Compute risks based on the FDR of prediction sets.

Attributes:

Name Type Description
name str
greater_is_better bool
Source code in risk_control/risk.py
def __init__(self, acceptable_risk: float):
    self.acceptable_risk = acceptable_risk

name class-attribute instance-attribute

name = 'FDR'

greater_is_better class-attribute instance-attribute

greater_is_better = False

convert_to_performance

convert_to_performance(x)

Convert risk to performance measure. If the object is a risk, the performance measure is the risk.

Parameters:

Name Type Description Default
x float

The risk value.

required

Returns:

Type Description
float

The performance measure.

Source code in risk_control/risk.py
def convert_to_performance(self, x: float) -> float:
    """
    Convert risk to performance measure.
    If the object is a risk, the performance measure is the risk.

    Parameters
    ----------
    x : float
        The risk value.

    Returns
    -------
    float
        The performance measure.
    """
    return x

compute

compute(y_pred, y_true, **kwargs)

Compute risks based on the FDR of prediction sets.

Parameters:

Name Type Description Default
y_pred ndarray

The predicted labels.

required
y_true ndarray

The true labels.

required
**kwargs dict

Additional keyword arguments.

{}

Returns:

Type Description
ndarray

The computed risks.

Source code in risk_control/risk.py
def compute(
    self,
    y_pred: np.ndarray,
    y_true: np.ndarray,
    **kwargs: Any,
) -> np.ndarray:
    """
    Compute risks based on the FDR of prediction sets.

    Parameters
    ----------
    y_pred : np.ndarray
        The predicted labels.
    y_true : np.ndarray
        The true labels.
    **kwargs : dict
        Additional keyword arguments.

    Returns
    -------
    np.ndarray
        The computed risks.
    """
    if y_pred.ndim == 1:
        risks = 1.0 - (y_pred == y_true)
        risks[np.isnan(y_pred)] = np.nan
        return risks

    # Multi classification
    # The false discovery rate is computed according to the formula:
    # fdr = 1 - (|y_pred \cap y_true| / |y_pred|)
    # where |.| is the cardinality of the set.
    elif y_true.ndim == 1:
        n_samples, _ = y_pred.shape
        indexes_abs = np.any(np.isnan(y_pred), axis=-1)
        risks = 1.0 - (y_pred[np.arange(n_samples), y_true]) / np.sum(
            y_pred, axis=-1
        )
        risks[indexes_abs] = np.nan  # noqa: E712
    else:
        indexes_abs = np.any(np.isnan(y_pred), axis=-1)
        risks = 1.0 - np.sum(y_pred * y_true, axis=-1) / np.sum(y_pred, axis=-1)
        risks[indexes_abs] = np.nan  # noqa: E712
    return risks
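
A minimal usage sketch for the prediction-set case (array values made up; import path assumed from the source location above). Per sample, the risk is 1 minus the fraction of the prediction set that matches the true label.

import numpy as np

from risk_control.risk import FalseDiscoveryRisk  # assumed import path

risk = FalseDiscoveryRisk(acceptable_risk=0.2)

# Prediction sets as indicator rows; risk = 1 - |y_pred ∩ y_true| / |y_pred|.
y_pred = np.array([
    [1.0, 1.0, 0.0],  # set {0, 1}, true label 0 -> 1 - 1/2 = 0.5
    [0.0, 1.0, 0.0],  # set {1},    true label 1 -> 0.0
    [0.0, 0.0, 1.0],  # set {2},    true label 0 -> 1.0
])
y_true = np.array([0, 1, 0])

print(risk.compute(y_pred, y_true))  # [0.5, 0., 1.]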

AbstentionRisk

AbstentionRisk(acceptable_risk)

Bases: BaseRisk

A class used to compute risks based on the ratio of human to machine predictions.

Methods:

Name Description
convert_to_performance

Convert risk to performance measure.

compute

Compute risks based on the ratio of human to machine predictions.

Attributes:

Name Type Description
name str
greater_is_better bool
Source code in risk_control/risk.py
def __init__(self, acceptable_risk: float):
    self.acceptable_risk = acceptable_risk

name class-attribute instance-attribute

name = 'abstension'

greater_is_better class-attribute instance-attribute

greater_is_better = False

convert_to_performance

convert_to_performance(x)

Convert risk to performance measure. If the object is a risk, the performance measure is the risk.

Parameters:

Name Type Description Default
x float

The risk value.

required

Returns:

Type Description
float

The performance measure.

Source code in risk_control/risk.py
def convert_to_performance(self, x: float) -> float:
    """
    Convert risk to performance measure.
    If the object is a risk, the performance measure is the risk.

    Parameters
    ----------
    x : float
        The risk value.

    Returns
    -------
    float
        The performance measure.
    """
    return x

compute

compute(y_pred, y_true, **kwargs)

Compute risks based on the ratio of human to machine predictions.

  • Machine predictions are those that do not ABSTAIN.
  • Human predictions are those that ABSTAIN.

Parameters:

Name Type Description Default
y_pred ndarray

The predicted labels.

required
y_true ndarray

The true labels.

required
**kwargs dict

Additional keyword arguments.

{}

Returns:

Type Description
ndarray

The computed risks.

Source code in risk_control/risk.py
def compute(
    self,
    y_pred: np.ndarray,
    y_true: np.ndarray,
    **kwargs: Any,
) -> np.ndarray:
    """
    Compute risks based on the ratio of human to machine predictions.

    - Machine predictions are assumed to be not ABSTAIN.
    - Human predictions are assumed to be ABSTAIN.

    Parameters
    ----------
    y_pred : np.ndarray
        The predicted labels.
    y_true : np.ndarray
        The true labels.
    **kwargs : dict
        Additional keyword arguments.

    Returns
    -------
    np.ndarray
        The computed risks.
    """
    if y_pred.ndim == 1:
        return np.isnan(y_pred)
    else:
        return np.all(np.isnan(y_pred), -1)
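
A minimal usage sketch (array values made up; import path assumed from the source location above). NaN encodes an abstention, i.e. a sample deferred to a human.

import numpy as np

from risk_control.risk import AbstentionRisk  # assumed import path

risk = AbstentionRisk(acceptable_risk=0.3)

y_pred = np.array([1.0, np.nan, 0.0, np.nan])  # NaN = abstention
y_true = np.array([1.0, 0.0, 0.0, 1.0])        # unused by this risk

risks = risk.compute(y_pred, y_true)
print(risks)           # [False  True False  True]
print(np.mean(risks))  # abstention rate: 0.5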

NonUniqueCandidateRisk

NonUniqueCandidateRisk(acceptable_risk)

Bases: BaseRisk

A class used to compute risks based on the presence of alternative candidate predictions.

Methods:

Name Description
convert_to_performance

Convert risk to performance measure.

compute

Compute risks based on the alternative predictions.

Attributes:

Name Type Description
name str
greater_is_better bool
Source code in risk_control/risk.py
def __init__(self, acceptable_risk: float):
    self.acceptable_risk = acceptable_risk

name class-attribute instance-attribute

name = 'non_unique_candidate_risk'

greater_is_better class-attribute instance-attribute

greater_is_better = False

convert_to_performance

convert_to_performance(x)

Convert risk to performance measure. If the object is a risk, the performance measure is the risk.

Parameters:

Name Type Description Default
x float

The risk value.

required

Returns:

Type Description
float

The performance measure.

Source code in risk_control/risk.py
def convert_to_performance(self, x: float) -> float:
    """
    Convert risk to performance measure.
    If the object is a risk, the performance measure is the risk.

    Parameters
    ----------
    x : float
        The risk value.

    Returns
    -------
    float
        The performance measure.
    """
    return x

compute

compute(y_pred, y_true, **kwargs)

Compute risks based on the alternative predictions.

  • If the prediction set is empty, the risk is 1.
  • If the prediction set has only one element, the risk is 0.
  • If the prediction set has more than one element, the risk is 1.

Parameters:

Name Type Description Default
y_pred ndarray

The predicted labels.

required
y_true ndarray

The true labels.

required
**kwargs dict

Additional keyword arguments.

{}

Returns:

Type Description
ndarray

The computed risks.

Source code in risk_control/risk.py
def compute(
    self,
    y_pred: np.ndarray,
    y_true: np.ndarray,
    **kwargs: Any,
) -> np.ndarray:
    """
    Compute risks based on the alternative predictions.

    - If the prediction set is empty, the risk is 1.
    - If the prediction set has only one element, the risk is 0.
    - If the prediction set has more than one element, the risk is 1.

    Parameters
    ----------
    y_pred : np.ndarray
        The predicted labels.
    y_true : np.ndarray
        The true labels.
    **kwargs : dict
        Additional keyword arguments.

    Returns
    -------
    np.ndarray
        The computed risks.
    """
    if y_pred.ndim == 1:
        return np.isnan(y_pred)
    else:
        return 1 - (np.sum(~np.isnan(y_pred) * y_pred, axis=-1) == 1)
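
A minimal usage sketch (array values made up; import path assumed from the source location above). With prediction sets encoded as indicator rows, the risk is 0 only when the set contains exactly one candidate.

import numpy as np

from risk_control.risk import NonUniqueCandidateRisk  # assumed import path

risk = NonUniqueCandidateRisk(acceptable_risk=0.2)

y_pred = np.array([
    [0.0, 1.0, 0.0],  # one candidate  -> risk 0
    [1.0, 1.0, 0.0],  # two candidates -> risk 1
    [0.0, 0.0, 0.0],  # empty set      -> risk 1
])
y_true = np.array([1, 0, 2])  # unused by this risk

print(risk.compute(y_pred, y_true))  # [0 1 1]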