aboutsummaryrefslogtreecommitdiff
blob: 0bc53e1c06ed372e70bc36e7b93c87bfb03bfd23 (plain)
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE pkgmetadata SYSTEM "http://www.gentoo.org/dtd/metadata.dtd">
<pkgmetadata>
	<maintainer type="person">
		<email>lssndrbarbieri@gmail.com</email>
		<name>Alessandro Barbieri</name>
	</maintainer>
	<longdescription lang="en">
Features

Reasonably fast, without GPU:
With TBB threading and SSE/AVX vectorization.
98.8% accuracy on MNIST in 13 minutes training (@Core i7-3520M).
Portable and header-only:
Runs anywhere as long as you have a compiler which supports C++14.
Just include tiny_dnn.h and write your model in C++. There is nothing to install.
Easy to integrate with real applications:
No output to stdout/stderr.
A constant throughput (simple parallelization model, no garbage collection).
Works without throwing an exception.
Can import caffe's model.
Simply implemented:
A good library for learning neural networks.

Supported networks

layer-types

core
fully connected
dropout
linear operation
zero padding
power
convolution
convolutional
average pooling
max pooling
deconvolutional
average unpooling
max unpooling
normalization
contrast normalization (only forward pass)
batch normalization
split/merge
concat
slice
elementwise-add
activation functions

tanh
asinh
sigmoid
softmax
softplus
softsign
rectified linear(relu)
leaky relu
identity
scaled tanh
exponential linear units(elu)
scaled exponential linear units (selu)
loss functions

cross-entropy
mean squared error
mean absolute error
mean absolute error with epsilon range
optimization algorithms

stochastic gradient descent (with/without L2 normalization)
momentum and Nesterov momentum
adagrad
rmsprop
adam
adamax
	</longdescription>
	<use>
		<flag name="double-precision">Build tiny-dnn with double precision computations</flag>
		<!--<flag name="libdnn">Build tiny-dnn with GreenteaLibDNN library support</flag>-->
		<!--<flag name="nnpack">Build tiny-dnn with NNPACK library support</flag>-->
		<flag name="opencl">Build tiny-dnn with OpenCL library support</flag>
		<flag name="serialization">Build tiny-dnn with Serialization support</flag>
		<flag name="tbb">Build tiny-dnn with TBB library support</flag>
	</use>
	<upstream>
		<remote-id type="github">tiny-dnn/tiny-dnn</remote-id>
	</upstream>
</pkgmetadata>