AO3 Qscore

An auto-sorting 'quality' indicator trained on 11k+ works. Generous with small fics; rewards engagement over popularity (bookmarks, collections, and comments relative to kudos, instead of hits) with a 0-100 score spread. Sort and position toggles included.


Author
C89sd
Daily installs
1
Total installs
11
Ratings
2 0 0
Version
2.31
Created
2025-05-06
Updated
2025-11-13
Size
23.3 KB
License
Unknown
Applies to


Improves on the classic (kudos, hits) metric. This score combines 3 metric pairs correlated with some of my favorite fics (see the graphs at the bottom): (bookmarks, kudos), (collections, kudos), and (comments, kudos).
Metrics can be disabled individually; for example, set THRESHOLDS { 'comments': bmin=Infinity, min=Infinity } to disable the contribution of the less accurate (comments, kudos) metric.
Setting the 'kudos' thresholds affects all 3 metric pairs, since they share it; you can, for example, raise its dim_below to hide more small fics.
(All default thresholds are tuned as low as possible to filter noise from the data; lowering them may un-dim inaccurate scores.)
(If you edit the code, make sure to disable auto-updates to preserve your changes.)
(You can observe the 3 individual scores by setting const DEBUG = true;.)
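As a hedged sketch of the threshold idea above (the actual THRESHOLDS shape and default numbers in the script may differ; everything below is illustrative only):

```javascript
// Illustrative THRESHOLDS object -- not the script's real defaults.
// Setting both minimums to Infinity makes a pair impossible to pass,
// effectively disabling it, as in the (comments, kudos) example above.
const THRESHOLDS = {
  kudos:       { bmin: 4,        min: 8 },        // shared by all 3 pairs
  bookmarks:   { bmin: 2,        min: 4 },
  collections: { bmin: 1,        min: 2 },
  comments:    { bmin: Infinity, min: Infinity }, // disabled
};

// A pair contributes fully above `min`, contributes dimmed above `bmin`,
// and is skipped below both (gating logic assumed for illustration).
function metricState(count, kudos, t, tKudos) {
  if (count >= t.min && kudos >= tKudos.min) return 'full';
  if (count >= t.bmin && kudos >= tKudos.bmin) return 'dimmed';
  return 'skipped';
}

console.log(metricState(10, 50, THRESHOLDS.bookmarks, THRESHOLDS.kudos)); // "full"
console.log(metricState(99, 99, THRESHOLDS.comments, THRESHOLDS.kudos));  // "skipped"
```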

Autosort: Navbar toggle ⇊|⇅ (sorted|default).
Indicator position: Navbar toggle ⇱|⇲ (top|bottom).
The score indicator now dodges the year so it stands out, and stays visible on fics collapsed by KH or KHX.
Dimming: Applied to low-confidence scores; dimmed fics are sorted at the end.
Toggles take effect without reloading. Colors adapt automatically to dark mode.
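A minimal sketch of the sort order described above, assuming each parsed work carries a score and a dimmed flag (field names are hypothetical):

```javascript
// Dimmed (low-confidence) fics sink to the end of the list;
// within each group, highest score comes first.
function compareFics(a, b) {
  if (a.dimmed !== b.dimmed) return a.dimmed ? 1 : -1; // dimmed last
  return b.score - a.score;                            // then by score, desc
}

const fics = [
  { id: 1, score: 72, dimmed: false },
  { id: 2, score: 95, dimmed: true  },
  { id: 3, score: 88, dimmed: false },
];
console.log(fics.sort(compareFics).map(f => f.id)); // [ 3, 1, 2 ]
```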


v2.26: - Improved scoring for extreme values. (Distributions are centered at 0 in case someone lowers bmin. Extreme highs are denoised and future-proofed using parallel lines past the cutoff.)


v2.23: - Added separate ALPHA for dimmed works, and updated defaults.
v2.22:
- Enhanced DEBUG indicators.
- I realised that (comments,kudos) has two different distributions depending on whether chapters==1 or chapters>=2, probably because people interact with oneshots differently. Adding a min threshold that disables (comments,kudos) when chapters<2 fixed its distribution. I have not tried to model this metric separately for chapters==1.
- I rewrote the algorithm to use stronger minimums, with a dimmed fallback to the old ones.


v2.21:
- Added a tunable z_scale to produce fewer scores from one metric, should you prefer another.
- Added an option to dim/fail works when fewer than N metrics pass, which should make alpha<1 more interesting for those who use it. Requiring multiple metrics gives more values to mix, making the Average more meaningful (the Average did nothing when only 1 metric passed; now it rewards works where all N metrics are high).
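The "fewer than N metrics" gate could look like the following sketch (MIN_PASSING is an assumed name and value, not the script's real default):

```javascript
// Illustrative gate: a work is dimmed unless at least MIN_PASSING of the
// 3 metric pairs passed their thresholds. With 2+ passing scores, the
// Average term of the blend actually has values to mix.
const MIN_PASSING = 2; // assumed value, for illustration only

function gateScores(passing) {
  return {
    dimmed: passing.length < MIN_PASSING,
    average: passing.length
      ? passing.reduce((s, x) => s + x, 0) / passing.length
      : 0,
  };
}

console.log(gateScores([91]));     // { dimmed: true, average: 91 }
console.log(gateScores([91, 77])); // { dimmed: false, average: 84 }
```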
v2.20:
- Looking closely at the graphs, Kudos should have been trained on min=8; this cleaned up the distributions.
- The default is back to Max; the rounding-off affected large works only. Small works get more 100s because only 1 of the 3 metrics is activated.
Tested: tCDF produces fewer 100s and spreads the middle 1pt to both sides, but is 6x as expensive; tCDF_norm does the opposite; sticking with CDF.


v2.19:
- Updated the formula and fixed the defaults.
Now using a Normal regression per-metric, and α*Max+(1-α)*Average for the final score.

Below: Max, Average, and Final blended score, with α=0.95 and α=0.68.

This demonstrates how lowering α mixes in more Average and shifts the scores left (including the red dots -- my favorite fics). Max is therefore the better metric, but I set the default at α=0.95 to round it off a bit and produce fewer 100 scores.
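The α-blend itself is simple to state in code. This sketch assumes scores are the per-metric 0-100 values (the function name is illustrative):

```javascript
// v2.19 final score: alpha * Max + (1 - alpha) * Average of the
// per-metric scores. alpha = 0.95 is the default mentioned above.
function blend(scores, alpha = 0.95) {
  const max = Math.max(...scores);
  const avg = scores.reduce((s, x) => s + x, 0) / scores.length;
  return alpha * max + (1 - alpha) * avg;
}

console.log(blend([100, 80, 60])); // ~99: 0.95*100 + 0.05*80
```

Lowering α pulls the result toward the Average, which is why scores shift left in the graphs above.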


v2.13:
- Option to take the AVERAGE of all metrics instead of their MAX in the code (the scale then becomes a blending weight instead of a cap). Gives a nicer spread while still blending.
v2.12:
- The (comments/kudos) metric is back among (bookmarks/kudos) and (collections/kudos).
- Uniformization is disabled by default at the top of the code: it was illogical given the use of max() to surface the best metric. This means you will see more high scores.
- Can disable a metric, scale it down, or raise its min contribution floor in the code.


v2.3: Switched to GAM score {Bk,Col/Ku} instead of Polynomial score {Bk,Col,Com/Ku}. (You can install v2.7 to compare both with the Q/P toggle.)


v2.0:
• 2nd-degree polynomial quantile regressions (P10, P50, and P90) of the 3 selected metric pairs:
For each pair, an individual score is computed by:
    ◦ Deriving a skewed normal distribution from the polynomial contours.
    ◦ Computing a z-score from the deviation off the P50 center line.
    ◦ Converting it into a 0-100 inverted percentile rank via the normal CDF.

• Final score == Max of all 3 reliable scores, gated by minimum stat counts.
The score is then mapped to a uniform 0-100 percentile rank via ECDF normalization.

• Log-scaled engagement metrics (bookmarks/collections/comments/kudos) were selected due to their high correlation with my top favorite fics. By comparison, the classic (x/hits) metrics feel completely random:

Update: I also tested (kudos, hits/chapters); it is strongly bimodal (1 chapter vs 2+ chapters) and just as random as (kudos, hits). In fact, I suspect that all metric pairs are bimodal and that 1-chapter works should ideally be modeled separately.