Single-Layer Perceptron as a Linear Classifier (Translation)
By robot-v1.0
Permalink: https://www.kyfws.com/ai/single-layer-perceptron-as-linear-classifier-zh/
Copyright notice: Unless otherwise stated, all posts on this blog are licensed under the BY-NC-SA license. Please credit the source when reposting!
Original article: https://www.codeproject.com/Articles/125346/Single-Layer-Perceptron-as-Linear-Classifier
Original author: Kanasz Robert
Translated by robot-v1.0
Preface
In this article, I will show you how to use a single-layer perceptron as a linear classifier of two classes.
Introduction
The perceptron is the simplest type of feed-forward neural network. It was designed by Frank Rosenblatt as a dichotomic classifier of two classes that are linearly separable. This means that the type of problem the network can solve must be linearly separable. A basic perceptron consists of 3 layers:
- Sensor layer
- Associative layer
- Output neuron

There are a number of inputs (x1, …, xn) in the sensor layer, corresponding weights (w1, …, wn), and an output. Sometimes w0 is called the bias and x0 = +1/−1 (in this case x0 = −1).
For every input of the perceptron (including the bias), there is a corresponding weight. To calculate the output of the perceptron, every input is multiplied by its corresponding weight. Then the weighted sum of all inputs is computed and fed through a limiter function that evaluates the final output of the perceptron.
The output of the neuron is formed by the activation of the output neuron, which is a function of the input:
y = F(w0·x0 + w1·x1 + … + wn·xn)    (1)
The activation function F can be linear, so that we have a linear network, or nonlinear. In this example, I decided to use the threshold (signum) function:
F(s) = +1 if s ≥ 0, −1 if s < 0    (2)
The output of the network in this case is either +1 or −1, depending on the input. If the total input (the weighted sum of all inputs) is positive, the pattern belongs to class +1, otherwise to class −1. Because of this behavior, we can use the perceptron for classification tasks.
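Putting equations (1) and (2) together for the two-input case, the forward pass can be sketched as follows. The class and method names, and the weights in `Main`, are my own illustration, not the article's code:

```csharp
using System;

class PerceptronDemo
{
    // Returns +1 or -1 depending on the sign of the weighted sum.
    // x0 is fixed to -1, so the bias weight w0 is subtracted -- eq. (1).
    public static int Classify(double w0, double w1, double w2,
                               double x1, double x2)
    {
        double sum = w1 * x1 + w2 * x2 - w0;  // weighted sum
        return sum < 0 ? -1 : 1;              // threshold function, eq. (2)
    }

    static void Main()
    {
        // With w0 = 1, w1 = 1, w2 = 1 the boundary is the line x1 + x2 = 1.
        Console.WriteLine(Classify(1, 1, 1, 2, 2));  // above the line -> 1
        Console.WriteLine(Classify(1, 1, 1, 0, 0));  // below the line -> -1
    }
}
```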
Let's consider a perceptron with 2 inputs whose input patterns we want to separate into 2 classes. In this case, the separation between the classes is a straight line, given by the equation:
w0·x0 + w1·x1 + w2·x2 = 0    (3)
When we set x0 = −1 and denote w0 = θ, we can rewrite equation (3) in the form:
w1·x1 + w2·x2 = θ    (4)
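Solving equation (4) for x2 gives the boundary line explicitly: x2 = (θ − w1·x1) / w2, valid when w2 ≠ 0. A small illustrative helper (the names are my own, not from the article):

```csharp
using System;

class BoundaryDemo
{
    // Solves eq. (4), w1*x1 + w2*x2 = theta, for x2.
    // Only valid when w2 != 0 (otherwise the boundary is a vertical line).
    public static double BoundaryX2(double theta, double w1, double w2, double x1)
    {
        return (theta - w1 * x1) / w2;
    }

    static void Main()
    {
        // For theta = 1, w1 = 1, w2 = 1 the boundary is the line x1 + x2 = 1.
        Console.WriteLine(BoundaryX2(1, 1, 1, 0));  // -> 1
        Console.WriteLine(BoundaryX2(1, 1, 1, 1));  // -> 0
    }
}
```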
Here I will describe the learning method for the perceptron. It is an iterative procedure that adjusts the weights. A learning sample is presented to the network, and for each weight a new value is computed by adding a correction to the old value. The threshold is updated in the same way:
wi(new) = wi(old) + α·(d − y)·xi / 2    (5)
where y is the output of the perceptron, d is the desired output, and α is the learning parameter.
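As a worked example of rule (5): for a misclassified sample with d = +1 and y = −1, the factor (d − y)/2 equals 1, so each weight moves by α·xi (with x0 = −1 for the bias weight). A small sketch, with class and method names of my own choosing:

```csharp
using System;

class UpdateDemo
{
    // One application of learning rule (5):
    // wi(new) = wi(old) + alpha * (d - y) * xi / 2, with x0 = -1.
    public static double[] Update(double[] w, double alpha,
                                  double x1, double x2, int d, int y)
    {
        double x0 = -1;
        return new[]
        {
            w[0] + alpha * (d - y) * x0 / 2,
            w[1] + alpha * (d - y) * x1 / 2,
            w[2] + alpha * (d - y) * x2 / 2,
        };
    }

    static void Main()
    {
        // Misclassified sample (2, 1): desired d = +1, actual y = -1,
        // alpha = 0.5, so the weights (0.5, 0.5, 0.5) become (0, 1.5, 1).
        double[] w = Update(new double[] { 0.5, 0.5, 0.5 }, 0.5, 2, 1, 1, -1);
        Console.WriteLine(string.Join(" ", w));
    }
}
```

A correctly classified sample leaves the weights unchanged, since d − y = 0.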
Using the Program
When you run the program, you see an area where you can input samples. Left-clicking on this area adds a first-class sample (blue cross); right-clicking adds a second-class sample (red cross). Samples are added to the `samples` list. You can also set the learning rate and the number of iterations. When you have set all these values, you can click the Learn button to start learning.
Using the Code
All samples are stored in the generic list `samples`, which holds only `Sample` class objects.
public class Sample
{
    double x1;
    double x2;
    double cls;

    public Sample(double x1, double x2, int cls)
    {
        this.x1 = x1;
        this.x2 = x2;
        this.cls = cls;
    }

    public double X1
    {
        get { return x1; }
        set { this.x1 = value; }
    }

    public double X2
    {
        get { return x2; }
        set { this.x2 = value; }
    }

    public double Class
    {
        get { return cls; }
        set { this.cls = value; }
    }
}
Before running the perceptron learning, it is important to set the learning rate and the number of iterations. The perceptron has one great property: if a solution exists, the perceptron will always find it. A problem occurs when no solution exists; in that case, the perceptron would keep searching in an infinite loop. To avoid this, it is better to set a maximum number of iterations.
The next step is to assign random values to the weights (w0, w1 and w2).
Random rnd = new Random();
w0 = rnd.NextDouble();
w1 = rnd.NextDouble();
w2 = rnd.NextDouble();
Once random values are assigned to the weights, we can loop through the samples, compute the output for every sample, and compare it with the desired output.
double x1 = samples[i].X1;
double x2 = samples[i].X2;
int y;
if (((w1 * x1) + (w2 * x2) - w0) < 0)
{
    y = -1;
}
else
{
    y = 1;
}
I decided to set x0 = −1, and for this reason the output of the perceptron is given by the equation y = w1·x1 + w2·x2 − w0. When the perceptron output and the desired output do not match, we must compute new weights:
if (y != samples[i].Class)
{
    error = true;
    w0 = w0 + alpha * (samples[i].Class - y) * x0 / 2;
    w1 = w1 + alpha * (samples[i].Class - y) * x1 / 2;
    w2 = w2 + alpha * (samples[i].Class - y) * x2 / 2;
}
`y` is the output of the perceptron and `samples[i].Class` is the desired output. The last two steps (looping through the samples and computing new weights) must be repeated while the `error` variable is `true` and the current number of iterations (`iterations`) is less than `maxIterations`.
int i;
int iterations = 0;
bool error = true;
maxIterations = int.Parse(txtIterations.Text);
Random rnd = new Random();
w0 = rnd.NextDouble();
w1 = rnd.NextDouble();
w2 = rnd.NextDouble();
alpha = (double)trackLearningRate.Value / 1000;

while (error && iterations < maxIterations)
{
    error = false;
    for (i = 0; i <= samples.Count - 1; i++)
    {
        double x1 = samples[i].X1;
        double x2 = samples[i].X2;
        int y;
        if (((w1 * x1) + (w2 * x2) - w0) < 0)
        {
            y = -1;
        }
        else
        {
            y = 1;
        }
        if (y != samples[i].Class)
        {
            error = true;
            w0 = w0 + alpha * (samples[i].Class - y) * x0 / 2;
            w1 = w1 + alpha * (samples[i].Class - y) * x1 / 2;
            w2 = w2 + alpha * (samples[i].Class - y) * x2 / 2;
        }
    }
    objGraphics.Clear(Color.White);
    DrawSeparationLine();
    iterations++;
}
The function `DrawSeparationLine` draws the separation line between the two classes.
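The body of `DrawSeparationLine` is not listed in the article, but the endpoints it needs follow directly from the boundary equation w1·x1 + w2·x2 − w0 = 0: for a given x1, x2 = (w0 − w1·x1) / w2. A minimal sketch of that endpoint computation (the class, method name, and `width` parameter are my own illustration; it assumes w2 ≠ 0):

```csharp
using System;

class SeparationLineDemo
{
    // Boundary x2 values at the left (x1 = 0) and right (x1 = width) edges
    // of the drawing area, from w1*x1 + w2*x2 - w0 = 0. Assumes w2 != 0.
    public static double[] Endpoints(double w0, double w1, double w2, double width)
    {
        double yLeft = (w0 - w1 * 0) / w2;
        double yRight = (w0 - w1 * width) / w2;
        return new[] { yLeft, yRight };
    }

    static void Main()
    {
        // For w0 = 1, w1 = 1, w2 = 1 and a 100-pixel-wide area:
        double[] e = Endpoints(1, 1, 1, 100);
        Console.WriteLine($"{e[0]} {e[1]}");  // boundary heights at both edges
    }
}
```

Drawing the line then amounts to a single `Graphics.DrawLine` call between (0, yLeft) and (width, yRight).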
History

- 07 Nov 2010 - Original version posted
License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).