Unveiling the "Algorithmic Veil": Reflections on Building an Algorithmic Explanation Framework
网络安全与数据治理, Issue 10
Liu Ye
(School of Law, Tongji University, Shanghai 200092, China)
Abstract: How to resolve the problem of algorithmic explainability is an important legal issue in algorithm governance. Constrained by the ever-widening "explanation gap" between algorithm users and audiences, the current dilemma of algorithmic explanation runs through the entire process from algorithm operation and decision formation to application, and manifests in three respects: imbalance in data identification, insufficiency of evidentiary basis, and generalization of damage results. Given the differing explanation needs across the fields in which algorithms are applied, constructing an algorithmic explanation framework through systematic thinking may become a breakthrough in solving the explainability problem. Taking the object of explanation as the logical starting point, explanation methods are divided into three modes: directional notification, public disclosure, and administrative reporting. Based on the concept of "scene justice", these modes are applied to fields such as healthcare, information recommendation, and finance, with the degree and criteria of explainability differentiated by business and scenario, so as to achieve algorithmic explainability.
CLC number: D922.14
Document code: A
DOI:10.19358/j.issn.2097-1788.2023.10.012
Citation: Liu Ye. Unveiling the "algorithmic veil": reflections on building an algorithmic interpretation framework[J]. 网络安全与数据治理, 2023, 42(10): 72-78.
Unveiling the "algorithmic veil": reflections on building an algorithmic interpretation framework
Liu Ye
(School of Law, Tongji University, Shanghai 200092, China)
Abstract: How to properly resolve the problem of algorithmic explainability is an important legal issue in algorithm governance, constrained by the expanding "explanation gap" between algorithm users and audiences. At this stage, the dilemma of algorithmic explainability exists throughout the process of algorithm operation, decision-making formation and application, and is embodied in three aspects: imbalance of data identification, insufficient proof basis and generalization of damage results. Considering the differences in interpretation requirements, explanation methods and criteria across application fields, building an algorithmic interpretation framework with the help of systematic thinking may become a breakthrough in solving the explainability problem. Taking the object of explanation as the logical jumping-off point, explanation methods are divided into three modes: directional notification, public disclosure and administrative reporting. These modes can be applied to medical, information recommendation, finance and other fields based on the concept of "scene justice", with the degree and criteria of explainability differentiated for different businesses and scenarios, so as to realize the explainability of algorithms.
Key words: algorithm governance; automated decision-making; explainability; criteria for explainability

0     Introduction

In the field of artificial intelligence, the premise on which algorithms can continue to operate and generate decisions is that they are explainable. Explainability is the safety feature that makes algorithms trustworthy, connecting algorithm governance with legal regulation. In recent years, as the complexity of algorithmic models has risen sharply, the "explanation gap" between algorithm users and the ultimate recipients of decisions has widened accordingly. For something like algorithms, which people cannot yet fully understand and grasp, legislators must carefully consider whether to impose legal requirements or arrangements, out of concern for the predictability and acceptability of legal norms. How to explain algorithms effectively has therefore become a key link in solving governance problems.

How to better explain algorithms themselves and their decisions, and in particular how to help users effectively understand the basic principles and operating mechanisms of algorithms, still awaits further refinement. Against this background, this article takes "scenario-based analysis" as its entry point, sorts out the practical dilemmas of algorithmic explainability in algorithm governance, and considers optimization schemes under specific scenarios.




For the full text of this article, please download: http://www.dervishd.net/resource/share/2000005742







This content is original to the AET website; reproduction without authorization is prohibited.