翻译专业术语表
警告：本文最后更新于 2021-09-23，文中内容可能已过时。
英语 | 中文 | 简称 |
---|---|---|
Accuracy | 精度 | |
Activation Function | 激活函数 | |
Adaptive Boosting | AdaBoost | AdaBoost |
Adaptive Gradient Algorithm | AdaGrad | AdaGrad |
Adaptive Moment Estimation Algorithm | Adam | Adam |
Affinity Matrix | 亲和矩阵 | |
Agent | 智能体 | |
Alpha-Beta Pruning | α-β修剪法 | |
Anomaly Detection | 异常检测 | |
Area Under ROC Curve | | AUC |
Artificial Intelligence | 人工智能 | AI |
Artificial Neural Network | 人工神经网络 | ANN |
Attention Mechanism | 注意力机制 | |
Autoencoder | 自编码器 | AE |
Automatic Differentiation | 自动微分 | AD |
Autoregressive | 自回归 | AR |
Back Propagation | 反向传播 | BP |
Bag of Words | 词袋 | BOW |
Bagging | 装袋 | |
Bandit | 赌博机/老虎机 | |
Baseline | 基准 | |
Batch Gradient Descent | 批量梯度下降法 | BGD |
Batch Normalization | 批量规范化 | BN |
Batch Size | 批量大小 | |
Bayes Classifier | 贝叶斯分类器 | |
Beam Search | 束搜索 | |
Benchmark | 基准 | |
Bi-Directional Long-Short Term Memory | 双向长短期记忆 | Bi-LSTM |
Bias | 偏差/偏置 | |
Bidirectional Recurrent Neural Network | 双向循环神经网络 | Bi-RNN |
Bigram | 二元语法 | |
Binary Sparse Coding | 二值稀疏编码 | |
Boosting Tree | 提升树 | |
Bootstrap Sampling | 自助采样法 | |
Bootstrapping | 自助法/自举法 | |
Bottom-Up | 自下而上 | |
Chebyshev Distance | 切比雪夫距离 | |
Classification And Regression Tree | 分类与回归树 | CART |
Computer Vision | 计算机视觉 | CV |
Conditional Random Field | 条件随机场 | CRF |
Confidence | 置信度 | |
Confusion Matrix | 混淆矩阵 | |
Conjugate Gradient | 共轭梯度 | |
Consistency Convergence | 一致性收敛 | |
Content-Addressable Memory | 基于内容寻址的存储 | CAM |
Context-Specific Independences | 特定上下文独立 | |
Contextual Bandit | 上下文赌博机/上下文老虎机 | |
Contextualized Representation | 基于上下文的表示 | |
Contractive Autoencoder | 收缩自编码器 | |
Contrastive Divergence | 对比散度 | |
Convergence | 收敛 | |
Convex Optimization | 凸优化 | |
Convex Quadratic Programming | 凸二次规划 | |
Convolutional Neural Network | 卷积神经网络 | CNN |
Correlation Coefficient | 相关系数 | |
Cost Function | 代价函数 | |
Covariance | 协方差 | |
Credit Assignment Problem | 贡献度分配问题 | CAP |
Cross Correlation | 互相关 | |
Cross Entropy | 交叉熵 | |
Cross Validation | 交叉验证 | |
Cross-Entropy Loss Function | 交叉熵损失函数 | |
Cumulative Distribution Function | 累积分布函数 | CDF |
Curvature | 曲率 | |
Curve-Fitting | 曲线拟合 | |
Data Mining | 数据挖掘 | |
Decision Tree | 决策树 | |
Deconvolution | 反卷积 | |
Deduction | 演绎 | |
Deep Convolutional Generative Adversarial Network | 深度卷积生成对抗网络 | DCGAN |
Denoising | 去噪 | |
Derivative | 导数 | |
Determinant | 行列式 | |
Diagonal Matrix | 对角矩阵 | |
Dimension Reduction | 降维 | |
Discriminative Model | 判别式模型 | |
Discriminator | 判别器 | |
Distance Measure | 距离度量 | |
Diverge | 发散 | |
Divergence | 散度 | |
Diversity Measure | 多样性度量/差异性度量 | |
Down Sampling | 下采样 | |
Dropout | 暂退法 | |
Dual Problem | 对偶问题 | |
Dynamic Programming | 动态规划 | |
Early Stopping | 早停 | |
Echo State Network | 回声状态网络 | |
Eigendecomposition | 特征分解 | |
Eigenvalue | 特征值 | |
Eigenvalue Decomposition | 特征值分解 | |
Element-Wise Product | 逐元素积 | |
Embedding | 嵌入 | |
Empirical Conditional Entropy | 经验条件熵 | |
End-To-End | 端到端 | |
Ensemble Learning | 集成学习 | |
Entropy | 熵 | |
Episode | 回合 | |
Epoch | 轮 | |
Estimation Of Mathematical Expectation | 数学期望估计 | |
Estimator | 估计/估计量 | |
Euclidean Distance | 欧氏距离 | |
Euclidean Norm | 欧几里得范数 | |
Evaluation Criterion | 评价准则 | |
Evolution | 演化 | |
Expectation | 期望 | |
Expectation Maximization | 期望最大化 | EM |
Exploding Gradient | 梯度爆炸 | |
Exponential Decay | 指数衰减 | |
Extreme Learning Machine | 超限学习机 | ELM |
Factor | 因子 | |
Feature Engineering | 特征工程 | |
Feature Map | 特征图 | |
Feature Selection | 特征选择 | |
Feedforward | 前馈 | |
Few-Shot Learning | 少试学习 | |
Filter | 滤波器 | |
Fine-Tuning | 微调 | |
Forward Propagation | 前向传播/正向传播 | |
Frobenius Norm | Frobenius范数 | |
Full Padding | 全填充 | |
Gated Recurrent Unit | 门控循环单元 | GRU |
Gated RNN | 门控RNN | |
Gaussian Mixture Model | 高斯混合模型 | GMM |
Generalization Ability | 泛化能力 | |
Generalization Error Bound | 泛化误差上界 | |
Generalize | 泛化 | |
Generalized Lagrange Function | 广义拉格朗日函数 | |
Generalized Rayleigh Quotient | 广义瑞利商 | |
Generative Adversarial Network | 生成对抗网络 | GAN |
Generative Model | 生成式模型 | |
Generator | 生成器 | |
Genetic Algorithm | 遗传算法 | |
Gini Index | 基尼指数 | |
Global Markov Property | 全局马尔可夫性 | |
Gradient | 梯度 | |
Gradient Clipping | 梯度截断 | |
Gradient Descent | 梯度下降 | |
Graph Convolutional Network | 图卷积神经网络 | GCN |
Graph Neural Network | 图神经网络 | GNN |
Graphical Model | 图模型 | GM |
Grid Search | 网格搜索 | |
Ground Truth | 真实值 | |
Hidden Markov Model | 隐马尔可夫模型 | HMM |
Hierarchical Clustering | 层次聚类 | |
Hold-Out | 留出法 | |
Hyperparameter | 超参数 | |
Hyperparameter Optimization | 超参数优化 | |
Hypothesis | 假设 | |
Hypothesis Space | 假设空间 | |
Hypothesis Test | 假设检验 | |
Identity Matrix | 单位矩阵 | |
Incremental Learning | 增量学习 | |
Independent and Identically Distributed | 独立同分布 | I.I.D. |
Induction | 归纳 | |
Inductive Bias | 归纳偏好 | |
Inference | 推断 | |
Information Entropy | 信息熵 | |
Information Gain | 信息增益 | |
Information Retrieval | 信息检索 | |
Inner Product | 内积 | |
Internet of Things | 物联网 | IoT |
Inverse Matrix | 逆矩阵 | |
Joint Probability Distribution | 联合概率分布 | |
K-Armed Bandit | k-摇臂老虎机 | |
K-Fold Cross Validation | k折交叉验证 | |
Karush-Kuhn-Tucker Condition | KKT条件 | |
Kernelized Linear Discriminant Analysis | 核线性判别分析 | KLDA |
KL Divergence | KL散度 | |
Knowledge Distillation | 知识蒸馏 | |
Label | 标签/标记 | |
Lagrange Duality | 拉格朗日对偶性 | |
Lagrange Multiplier | 拉格朗日乘子 | |
Latent Semantic Analysis | 潜在语义分析 | LSA |
Latent Variable | 潜变量/隐变量 | |
Layer Normalization | 层规范化 | |
Lazy Learning | 懒惰学习 | |
Leaky ReLU | 泄漏修正线性单元/泄漏整流线性单元 | |
Learning By Analogy | 类比学习 | |
Learning Rate | 学习率 | |
Least Square Method | 最小二乘法 | LSM |
Likelihood | 似然 | |
Linear Dependence | 线性相关 | |
Linear Discriminant Analysis | 线性判别分析 | LDA |
Linear Regression | 线性回归 | |
Local Representation | 局部式表示/局部式表征 | |
Log Likelihood | 对数似然函数 | |
Log-Likelihood | 对数似然 | |
Logistic Regression | 对数几率回归 | LR |
Logit | 对数几率 | |
Long Short Term Memory | 长短期记忆 | LSTM |
Loss Function | 损失函数 | |
Macro-Recall | 宏查全率 | Macro-R |
Manhattan Distance | 曼哈顿距离 | |
Manifold | 流形 | |
Margin | 间隔 | |
Marginal Distribution | 边缘分布 | |
Marginal Independence | 边缘独立性 | |
Marginalization | 边缘化 | |
Markov Chain | 马尔可夫链 | |
Markov Chain Monte Carlo | 马尔可夫链蒙特卡罗 | MCMC |
Markov Decision Process | 马尔可夫决策过程 | MDP |
Markov Random Field | 马尔可夫随机场 | MRF |
Mask | 掩码 | |
Matrix Inversion | 矩阵求逆 | |
Max Pooling | 最大汇聚 | |
Maximal Clique | 最大团 | |
Maximum Entropy Model | 最大熵模型 | |
Maximum Likelihood Estimation | 极大似然估计 | MLE |
Maximum Margin | 最大间隔 | |
Mean Field | 平均场 | |
Mean Pooling | 平均汇聚 | |
Mean Squared Error | 均方误差 | |
Mean-Field | 平均场 | |
Message Passing | 消息传递 | |
Metric Learning | 度量学习 | |
Micro-Recall | 微查全率 | Micro-R |
Minibatch | 小批量 | |
Minimax Game | 极小极大博弈 | |
Minkowski Distance | 闵可夫斯基距离 | |
Momentum Method | 动量法 | |
Monte Carlo Method | 蒙特卡罗方法 | |
Moralization | 道德化 | |
Multi-Class Classification | 多分类 | |
Multi-Head Attention | 多头注意力 | |
Multi-Head Self-Attention | 多头自注意力 | |
Multi-Layer Perceptron | 多层感知机 | MLP |
Multinomial Distribution | 多项分布 | |
Multiple Dimensional Scaling | 多维缩放 | |
Multiple Linear Regression | 多元线性回归 | |
Mutual Information | 互信息 | |
N-Gram | N元 | |
N-Gram Model | N元模型 | |
Naive Bayes | 朴素贝叶斯 | NB |
Natural Language Generation | 自然语言生成 | NLG |
Natural Language Processing | 自然语言处理 | NLP |
Natural Language Understanding | 自然语言理解 | NLU |
Nearest Neighbor | 最近邻 | |
Net Input | 净输入 | |
Newton Method | 牛顿法 | |
No Free Lunch Theorem | 没有免费午餐定理 | NFL |
Norm | 范数 | |
Normal Distribution | 正态分布 | |
Normalization | 规范化 | |
Object Detection | 目标检测 | |
Odds | 几率 | |
Off-Policy | 异策略 | |
On-Policy | 同策略 | |
One-Hot | 独热 | |
Online Learning | 在线学习 | |
Optimizer | 优化器 | |
Ordinal Attribute | 有序属性 | |
Orthogonal | 正交 | |
Orthogonal Matrix | 正交矩阵 | |
Outlier | 异常点 | |
Overfitting | 过拟合 | |
Oversampling | 过采样 | |
Padding | 填充 | |
Parameter Estimation | 参数估计 | |
Parameter Tuning | 调参 | |
Parametric ReLU | 参数化修正线性单元/参数化整流线性单元 | PReLU |
Part-Of-Speech Tagging | 词性标注 | |
Partial Derivative | 偏导数 | |
Partition Function | 配分函数 | |
Perceptron | 感知机 | |
Performance Measure | 性能度量 | |
Perplexity | 困惑度 | |
Pervasive Learning | 普适学习 | |
Policy | 策略 | |
Polynomial Kernel Function | 多项式核函数 | |
Pooling | 汇聚 | |
Pooling Layer | 汇聚层 | |
Positive Definite Matrix | 正定矩阵 | |
Post-Pruning | 后剪枝 | |
Potential Function | 势函数 | |
Power Method | 幂法 | |
Precision | 查准率/准确率 | |
Pre-Pruning | 预剪枝 | |
Principal Component Analysis | 主成分分析 | PCA |
Prior | 先验 | |
Singular Value Decomposition | 奇异值分解 | SVD |
Support Vector Machines | 支持向量机 | SVM |
Validation Set | 验证集 | |
Vanishing Gradient | 梯度消失 | |
Vapnik-Chervonenkis Dimension | VC维 | |
Variance | 方差 | |
Variance Reduction | 方差减小 | |
Variance Scaling | 方差缩放 | |
Variational Autoencoder | 变分自编码器 | VAE |
Vectorization | 向量化 | |
Wasserstein Distance | Wasserstein距离 | |
Wasserstein GAN | Wasserstein生成对抗网络 | WGAN |
Word Embedding | 词嵌入 | |
Word Sense Disambiguation | 词义消歧 | |
Zero-Shot Learning | 零试学习 |