Dataset schema: a_id (int64), a_body (string), a_creation_date (string), a_last_activity_date (string), a_last_edit_date (string), a_tags (float64), q_id (int64), q_body (string), q_creation_date (string), q_last_activity_date (string), q_last_edit_date (string), q_tags (string), _arxiv_links (string), _n_arxiv_links (int64)
62,949,683
<p>Yes, the second version with acquire/release order for the operations on <code>m_instance</code> is correct and equivalent to the first version. In fact, it is even preferable to the first version, because fences affect all preceding (acquire)/succeeding (release) atomic operations, but you only need synchronization of operations on <code>m_instance</code>. That's why on some architectures explicit fences are slower.</p> <hr /> <p>Why do you need acquire/release in the first place? Because you need a happens-before relation between creation of the singleton and usage of the singleton to avoid data races. Suppose the following:</p> <ul> <li>Thread 1 calls <code>Singleton::getInstance()</code> and initializes it; this involves creating a new object and storing the pointer in <code>m_instance</code></li> <li>Thread 2 calls <code>Singleton::getInstance()</code> and observes the pointer written by Thread 1</li> </ul> <p>Thread 2 will most likely <em>dereference</em> the pointer to access the object it points to, and this access is most likely non-atomic. So you have two non-atomic accesses to the object - one during creation and one when using the object. If these are not ordered by a happens-before relation, then this is a data race.</p> <p>So how do we establish a happens-before relation? By storing the pointer with <code>memory_order_release</code>, and reading it with <code>memory_order_acquire</code>. When an acquire-load operation observes the value written by a release-store, the load <em>synchronizes-with</em> the store, thereby establishing a happens-before relation. Further, construction of the object is <em>sequenced-before</em> the store, and the load is sequenced-before the dereference (sequenced-before also implies happens-before), and since happens-before is <em>transitive</em>, it follows that construction happens-before the dereference.</p> <p>For more details on the C++ memory model I recommend this paper, which I have co-authored: <a href="https://arxiv.org/abs/1803.04432" rel="nofollow noreferrer">Memory Models for C/C++ Programmers</a></p>
2020-07-17 08:04:24.323000+00:00
2020-07-17 08:04:24.323000+00:00
null
null
62,947,709
<pre><code>Singleton* Singleton::getInstance() { Singleton* tmp = m_instance.load(std::memory_order_relaxed); std::atomic_thread_fence(std::memory_order_acquire); //&lt;--1 if (tmp == nullptr) { std::lock_guard&lt;std::mutex&gt; lock(m_mutex); tmp = m_instance.load(std::memory_order_relaxed); if (tmp == nullptr) { tmp = new Singleton; assert(tmp != nullptr); std::atomic_thread_fence(std::memory_order_release); //&lt;--2 m_instance.store(tmp, std::memory_order_relaxed); } } return tmp; } </code></pre> <p>Here is a common C++ singleton implementation. There is a <code>release fence</code> at <code>2</code> (marked above); that one is easy to understand: it prevents the <code>new Singleton</code> from being reordered, and without it another thread might get an instance whose construction has not been executed yet.</p> <p>What confuses me is the <code>acquire fence</code> at <code>1</code>. The <code>release fence</code> promises that the Singleton construction has been executed before the store to <code>m_instance</code>, so when we fetch the instance we won't get one whose construction has not been executed. Why do we still need an <code>acquire fence</code> at <code>1</code>?</p> <p>And can we replace the <code>atomic_thread_fence</code> calls with memory orders on the <code>m_instance</code> operations? Are they the same? (Shown below.)</p> <pre><code>Singleton* Singleton::getInstance() { Singleton* tmp = m_instance.load(std::memory_order_acquire); if (tmp == nullptr) { std::lock_guard&lt;std::mutex&gt; lock(m_mutex); tmp = m_instance.load(std::memory_order_relaxed); if (tmp == nullptr) { tmp = new Singleton; assert(tmp != nullptr); m_instance.store(tmp, std::memory_order_release); } } return tmp; } </code></pre>
2020-07-17 05:35:07.930000+00:00
2020-07-17 08:04:24.323000+00:00
null
c++|memory-barriers
['https://arxiv.org/abs/1803.04432']
1
51,908,050
<p>Imran's answer is correct in that, from a theoretical point of view, the UCB1 strategy typically used in the <em>Selection phase</em> of MCTS should <strong>eventually</strong> be able to handle the kinds of situations you describe, and that MCTS (assuming we use something like UCB1 for the Selection phase) will <strong>eventually</strong> converge to minimax evaluations.</p> <p>However, "<strong>eventually</strong>" here means "after an infinite number of MCTS iterations". We need an infinite amount of processing time because only the <em>Selection phase</em> of MCTS can adequately handle the types of situations you describe (the <em>Playout phase</em> can't), and the <em>Selection phase</em> is only actually used in a slowly-growing part of the tree around the root node. So, if the situations you describe are "located" relatively close to the root node, then we can expect that strategies like UCB1 can adequately handle them. If they are very deep / far away from the root, so deep that we don't manage to grow the search tree that far in the processing time we have... then MCTS indeed does not tend to handle these situations well.</p> <p>Note that a similar thing can be said for minimax-based approaches; if they don't manage to search deep enough, they can also result in poor evaluations. The story tends to be much more binary in the case of minimax-like algorithms though; either they manage to search sufficiently deep for good evaluations, or they don't. In the case of MCTS, it will always evaluate these types of situations poorly initially, and might gradually improve as the search tree gradually grows.</p> <p>In practice, minimax/alpha-beta/related algorithms were believed to outperform MCTS-based methods for about a full decade in games with many "trap" situations, like the situations you describe. This includes chess-like games. During the same period of time, MCTS was already much more promising in games like Go. Only in <a href="https://arxiv.org/abs/1712.01815" rel="nofollow noreferrer">a recent paper</a> did a combination of MCTS + Deep Reinforcement Learning + ridiculous amounts of hardware beat minimax-based approaches in chess-like games.</p>
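<p>For concreteness, a minimal Python sketch of the UCB1 selection rule discussed above; the node fields (<code>visits</code>, <code>total_value</code>) and the exploration constant <code>c</code> are illustrative assumptions, not taken from any particular MCTS implementation:</p> <pre><code>
import math

def ucb1_select(parent_visits, children, c=1.41):
    """Pick the child maximizing exploitation + exploration."""
    def ucb1(child):
        if child.visits == 0:
            return float("inf")          # always try unvisited children first
        exploit = child.total_value / child.visits
        explore = c * math.sqrt(math.log(parent_visits) / child.visits)
        return exploit + explore
    return max(children, key=ucb1)
</code></pre>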
2018-08-18 11:17:52.517000+00:00
2018-08-18 11:17:52.517000+00:00
null
null
51,881,397
<p>So I am familiar with more basic tree search algorithms like game search w/ minimax, but I've been trying to learn more about the Monte-Carlo Tree Search algorithm, and was wondering how it deals with 'precise lines.'</p> <p>In the context of chess, where you might be in a position with 30 losing moves but 1 winning line, how would the MCTS algorithm, more specifically the UCB1 function, deal with this? The way I understand UCB1 is that it essentially does a sort of average over its child nodes, so the UCB1 value of a chess line where you have 30 losing moves and one winning one would be deceptively low, right?</p> <p>I'm still learning about MCTS but I've always had this question and was hoping someone could explain how MCTS still converges to minimax even if a UCB1 value might be very low.</p> <p>Any knowledge would be appreciated! Thanks</p>
2018-08-16 16:14:58.533000+00:00
2018-08-18 11:17:52.517000+00:00
null
machine-learning|chess|montecarlo|monte-carlo-tree-search
['https://arxiv.org/abs/1712.01815']
1
53,046,624
<p>Theory suggests that when multiplying the batch size by k, one should multiply the learning rate by sqrt(k) to keep the variance in the gradient expectation constant. See page 5 of <em>A. Krizhevsky. One weird trick for parallelizing convolutional neural networks</em>: <a href="https://arxiv.org/abs/1404.5997" rel="noreferrer">https://arxiv.org/abs/1404.5997</a></p> <p>However, recent experiments with large mini-batches suggest a simpler linear scaling rule, i.e., multiply your learning rate by k when using a mini-batch size of kN. See <em>P. Goyal et al.: Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour</em> <a href="https://arxiv.org/abs/1706.02677" rel="noreferrer">https://arxiv.org/abs/1706.02677</a></p> <p>I would say that when using Adam, Adagrad and other adaptive optimizers, the learning rate may remain the same if the batch size does not change substantially.</p>
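<p>A small Python sketch of the two scaling rules mentioned above; the base values are made-up placeholders, not recommendations:</p> <pre><code>
import math

base_lr, base_batch = 0.1, 256              # hypothetical baseline setup
new_batch = 1024
k = new_batch / base_batch

sqrt_scaled_lr   = base_lr * math.sqrt(k)   # Krizhevsky: keep gradient variance constant
linear_scaled_lr = base_lr * k              # Goyal et al.: linear scaling rule

print(sqrt_scaled_lr, linear_scaled_lr)     # 0.2 and 0.4 for this example
</code></pre>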
2018-10-29 13:33:57.747000+00:00
2018-10-29 13:33:57.747000+00:00
null
null
53,033,556
<p>When I increase/decrease batch size of the mini-batch used in SGD, should I change learning rate? If so, then how?</p> <p>For reference, I was discussing with someone, and it was said that, when batch size is increased, the learning rate should be decreased by some extent. </p> <p>My understanding is when I increase batch size, computed average gradient will be less noisy and so I either keep same learning rate or increase it. </p> <p>Also, if I use an adaptive learning rate optimizer, like Adam or RMSProp, then I guess I can leave learning rate untouched.</p> <p>Please correct me if I am mistaken and give any insight on this.</p>
2018-10-28 16:17:04.157000+00:00
2022-08-04 13:13:54.367000+00:00
2020-05-24 05:00:21.087000+00:00
machine-learning|deep-learning
['https://arxiv.org/abs/1404.5997', 'https://arxiv.org/abs/1706.02677']
2
44,335,834
<p>According to <a href="https://arxiv.org/abs/1705.11035" rel="noreferrer">this</a> paper, there is a class of convex polygons in which the algorithm cited by ShreevatsaR's answer fails. The paper also proposes a O(n log n) divide and conquer algorithm for solving the problem.</p> <p>Apparently, the simpler O(n<sup>2</sup>) algorithm in which you move points B and C for <strong>all</strong> A is still valid.</p>
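<p>A rough Python sketch of that O(n<sup>2</sup>) approach (fix A, then sweep B while advancing C as long as the area grows); it assumes the vertices are given in convex order and is meant as an illustration rather than a vetted implementation:</p> <pre><code>
def max_triangle_area(pts):
    # pts: list of (x, y) vertices of a convex polygon, in order
    n = len(pts)

    def area(i, j, k):
        (x1, y1), (x2, y2), (x3, y3) = pts[i], pts[j], pts[k]
        return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

    best = 0.0
    for a in range(n):                  # fix point A
        c = a + 2
        for b in range(a + 1, n - 1):   # sweep point B
            c = max(c, b + 1)
            while c + 1 &lt; n and area(a, b, c + 1) &gt;= area(a, b, c):
                c += 1                  # advance C while the area keeps growing
            best = max(best, area(a, b, c))
    return best
</code></pre>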
2017-06-02 19:04:57.460000+00:00
2017-06-02 19:04:57.460000+00:00
null
null
1,621,364
<p>Given a convex polygon, how do I find the 3 points that define a triangle with the greatest area. </p> <p><strong>Related:</strong> Is it true that the circumcircle of that triangle would also define the minimum bounding circle of the polygon? </p>
2009-10-25 16:46:20.427000+00:00
2022-05-15 19:23:44.737000+00:00
null
c|algorithm|geometry
['https://arxiv.org/abs/1705.11035']
1
7,523,388
<p>I've looked into this further, and it turns out that this distribution has been studied at length. The reason it's of interest is because this "broken" algorithm is (or was) used in the RSA chip system.</p> <p>In <a href="http://arxiv.org/abs/math.PR/0404438" rel="nofollow">Shuffling by semi-random transpositions</a>, Elchanan Mossel, Yuval Peres, and Alistair Sinclair study this and a more general class of shuffles. The upshot of that paper appears to be that it takes <code>log(n)</code> broken shuffles to achieve a near-random distribution.</p> <p>In <em>The bias of three pseudorandom shuffles</em> (<em>Aequationes Mathematicae</em>, 22, 1981, 268-292), Ethan Bolker and David Robbins analyze this shuffle and determine that the total variation distance to uniformity after a single pass is 1, indicating that it is not very random at all. They give asymptotic analyses as well.</p> <p>Finally, Laurent Saloff-Coste and Jessica Zuniga found a nice upper bound in their study of inhomogeneous Markov chains.</p>
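<p>A small Python simulation that makes the bias visible by tabulating where element 0 ends up under the correct Fisher-Yates shuffle versus the broken "swap with a random index from the whole array" variant; the array size and trial count are arbitrary:</p> <pre><code>
import random
from collections import Counter

def correct_shuffle(a):
    for k in range(len(a)):
        j = random.randrange(k, len(a))      # j from k..N-1
        a[k], a[j] = a[j], a[k]

def broken_shuffle(a):
    for k in range(len(a)):
        j = random.randrange(len(a))         # j from 0..N-1 (the mistake)
        a[k], a[j] = a[j], a[k]

def final_position_counts(shuffle, n=4, trials=100_000):
    counts = Counter()
    for _ in range(trials):
        a = list(range(n))
        shuffle(a)
        counts[a.index(0)] += 1              # where did element 0 land?
    return [round(counts[i] / trials, 3) for i in range(n)]

print(final_position_counts(correct_shuffle))  # roughly uniform: ~0.25 each
print(final_position_counts(broken_shuffle))   # visibly non-uniform
</code></pre>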
2011-09-23 01:39:26.023000+00:00
2012-03-14 21:59:03.060000+00:00
2012-03-14 21:59:03.060000+00:00
null
5,131,341
<p>The famous Fisher-Yates shuffle algorithm can be used to randomly permute an array A of length N:</p> <pre><code>For k = 1 to N Pick a random integer j from k to N Swap A[k] and A[j] </code></pre> <p>A common mistake that I've been told over and over again not to make is this:</p> <pre><code>For k = 1 to N Pick a random integer j from 1 to N Swap A[k] and A[j] </code></pre> <p>That is, instead of picking a random integer from k to N, you pick a random integer from 1 to N.</p> <p>What happens if you make this mistake? I know that the resulting permutation isn't uniformly distributed, but I don't know what guarantees there are on what the resulting distribution will be. In particular, does anyone have an expression for the probability distributions over the final positions of the elements?</p>
2011-02-27 03:51:05.537000+00:00
2022-09-24 11:45:57.340000+00:00
2015-08-07 14:41:42.867000+00:00
algorithm|language-agnostic|math|random|shuffle
['http://arxiv.org/abs/math.PR/0404438']
1
53,249,102
<p>The answer very much depends on what exactly you call a Perceptron. Common options are:</p> <ol> <li><p>Complete architecture. Then <strong>no</strong>, simply because it's by definition a different NN.</p></li> <li><p>A model of a single neuron, specifically <code>y = 1 if (w.x + b) &gt; 0 else 0</code>, where <code>x</code> is the input of the neuron, <code>w</code> and <code>b</code> are its trainable parameters and <code>w.x</code> denotes the dot product. Then <strong>yes</strong>, you can force a bunch of these perceptrons to share weights and call it a CNN. You'll find variants of this idea being used in <a href="https://arxiv.org/abs/1602.02830" rel="nofollow noreferrer">binary neural networks</a>.</p></li> <li><p><a href="https://en.wikipedia.org/wiki/Perceptron#Learning_algorithm" rel="nofollow noreferrer">A training algorithm</a>, typically associated with the Perceptron architecture. This reading makes little sense for the question, because the learning algorithm is in principle orthogonal to the architecture. Though you cannot really use the Perceptron algorithm for anything with hidden layers, which would suggest <strong>no</strong> as the answer in this case.</p></li> <li><p>Loss function associated with the original Perceptron. This notion of Perceptron is orthogonal to the problem at hand; your loss function with a CNN is given by whatever you try to do with your whole model. You can eventually use it, but it is non-differentiable, so good luck :-)</p></li> </ol> <p>A sidenote rant: You can see people refer to feed-forward, fully-connected NNs with hidden layers as "Multilayer Perceptrons" (MLPs). This is a misnomer: there are no Perceptrons in MLPs, see e.g. this discussion <a href="https://en.wikipedia.org/wiki/Multilayer_perceptron#Terminology" rel="nofollow noreferrer">on Wikipedia</a> -- unless you go explore some really weird ideas. It would make more sense to call these networks Multilayer Linear Logistic Regression, because that's what they used to be composed of, up till about 6 years ago.</p>
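<p>A tiny NumPy sketch of option 2 above: one neuron's weights shared across all 5x5 receptive fields of a 28x28 input, which is exactly a (valid) convolution producing a 24x24 feature map. The sizes follow the question; the weights are random placeholders:</p> <pre><code>
import numpy as np

image = np.random.rand(28, 28)          # placeholder input "image"
w = np.random.randn(5, 5)               # ONE set of weights, shared everywhere
b = 0.1                                 # shared bias

feature_map = np.zeros((24, 24))        # 28 - 5 + 1 = 24
for i in range(24):
    for j in range(24):
        patch = image[i:i + 5, j:j + 5]             # 5x5 receptive field
        feature_map[i, j] = np.sum(patch * w) + b   # same neuron applied everywhere

# a strict perceptron would additionally threshold the result:
binary_map = (feature_map &gt; 0).astype(int)
</code></pre>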
2018-11-11 13:15:21.460000+00:00
2018-11-11 13:15:21.460000+00:00
null
null
42,646,964
<p>I was reading <a href="http://neuralnetworksanddeeplearning.com/chap6.html" rel="nofollow noreferrer">this interesting article</a> on convolutional neural networks. It showed this image, explaining that for every receptive field of 5x5 pixels/neurons, a value for a hidden neuron is calculated.</p> <p><a href="https://i.stack.imgur.com/72Gwn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/72Gwn.png" alt="receptive field to neuron"></a></p> <blockquote> <p>We can think of max-pooling as a way for the network to ask whether a given feature is found anywhere in a region of the image. It then throws away the exact positional information.</p> </blockquote> <p>So max-pooling is applied. </p> <p><a href="https://i.stack.imgur.com/F0ueZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F0ueZ.png" alt="enter image description here"></a></p> <p>With multiple convolutional layers, it looks something like this: </p> <p><a href="https://i.stack.imgur.com/44S3P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/44S3P.png" alt="enter image description here"></a></p> <p>But my question is, this whole architecture could be built with perceptrons, right?</p> <p>For every convolutional layer, one perceptron is needed, with layers:</p> <pre><code>input_size = 5x5; hidden_size = 10; e.g. output_size = 1; </code></pre> <p>Then for every receptive field in the original image, the 5x5 area is inputted into a perceptron to output the value of a neuron in the hidden layer. So basically doing this for every receptive field: <a href="https://i.stack.imgur.com/7fVFO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7fVFO.png" alt="enter image description here"></a></p> <p>So the same perceptron is used 24x24 times to construct the hidden layer, because: </p> <blockquote> <p>is that we're going to use the same weights and bias for each of the 24×24 hidden neurons. </p> </blockquote> <p>And this works for the hidden layer to the pooling layer as well, <code>input_size = 2x2;</code> <code>output_size = 1;</code>. And in the case of a max-pool layer, it's just a <code>max()</code> function on an array.</p> <p>and then finally:</p> <blockquote> <p>The final layer of connections in the network is a fully-connected layer. That is, this layer connects every neuron from the max-pooled layer to every one of the 10 output neurons.</p> </blockquote> <p>which is a perceptron again.</p> <p>So my final architecture looks like this:</p> <pre><code>-&gt; 1 perceptron for every convolutional layer/feature map -&gt; run this perceptron for every receptive field to create feature map -&gt; 1 perceptron for every pooling layer -&gt; run this perceptron for every field in the feature map to create a pooling layer -&gt; finally input the values of the pooling layer in a regular ALL to ALL perceptron </code></pre> <p>Or am I overlooking something? Or is this already how they are programmed?</p>
2017-03-07 11:19:59.863000+00:00
2018-11-11 13:15:21.460000+00:00
2017-03-08 09:19:32.207000+00:00
neural-network|conv-neural-network
['https://arxiv.org/abs/1602.02830', 'https://en.wikipedia.org/wiki/Perceptron#Learning_algorithm', 'https://en.wikipedia.org/wiki/Multilayer_perceptron#Terminology']
3
49,600,843
<p>The book "Nonstationarities in Hydrologic and Environmental Time Series" (Springer), on page 119, provides a good explanation of how to interpret those p-values within the Priestley-Subba Rao test.</p> <p>In general, you may also take a look at:</p> <p><a href="https://www.stat.tamu.edu/~suhasini/test_papers/priestley_subbarao70.pdf" rel="nofollow noreferrer">https://www.stat.tamu.edu/~suhasini/test_papers/priestley_subbarao70.pdf</a></p> <p>As for other stationarity tests, you may have a look at the "weakly.stationary()" function within the "analytics" package and at the "costat" package, with more info at:</p> <p><a href="https://www.jstatsoft.org/article/view/v055i01" rel="nofollow noreferrer">https://www.jstatsoft.org/article/view/v055i01</a></p> <p>where there is a suggestion for handling time series whose length is not dyadic (i.e., not 2^J for some natural number J). On page 5:</p> <p><em>"It should be made clear that this is not a limitation of wavelets per se, but of the computationally efficient algorithms used to compute the intended quantities. Data sets of other lengths can be handled by zero-padding or truncation"</em></p> <p>Some interesting info at:</p> <p><a href="https://arxiv.org/pdf/1603.06415.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1603.06415.pdf</a></p>
2018-04-01 17:39:28.787000+00:00
2018-04-01 18:03:56.943000+00:00
2018-04-01 18:03:56.943000+00:00
null
48,494,996
<p>I need to check second order stationarity of a time series of length 7320 (I have 1800 such time series). These time series are displacement recorded at 1800 sites on a mountain. I tried using Priestley-Subba Rao in R : <code>stationarity()</code>. For 1 time series out of 1800, I got these values:</p> <pre><code>p-value for T : 2.109424e-15 p-value for I+R : 9.447661e-06 p-value for T+I+R : 1.4099e-10 </code></pre> <p>Could you please tell me how to interpret it. All I know is if the p-value for T is 0, the null hypothesis of time series being stationary is rejected. Also, for 2nd time series out of 1800, I got these values;</p> <pre><code>p-value for T : 0 p-value for I+R : 1.458063e-09 p-value for T+I+R : 0 </code></pre> <p>Could you tell me how to differentiate between the two. Both the time series are from the same dataset. Also, is it possible that one time series is stationary and another is not, given the fact they are from the same site and recorded at the exact same time.</p> <p>I also tried Wavelet Spectrum Test in R: <code>hwtos2()</code> function. But this function takes the time-series length that are power of 2. Is there any other better test for looking at stationarity that does not limit with the length of time series?</p>
2018-01-29 05:38:20.553000+00:00
2020-10-05 10:16:21.600000+00:00
2018-01-29 05:40:42.403000+00:00
r|time-series
['https://www.stat.tamu.edu/~suhasini/test_papers/priestley_subbarao70.pdf', 'https://www.jstatsoft.org/article/view/v055i01', 'https://arxiv.org/pdf/1603.06415.pdf']
3
41,491,866
<p>There are several forms of parallelism that TensorFlow provides when training a convolutional neural network (and many other machine learning models), including:</p> <ol> <li><p><strong>Parallelism within individual operations</strong> (such as <a href="https://www.tensorflow.org/api_docs/python/nn/convolution#conv2d" rel="nofollow noreferrer"><code>tf.nn.conv2d()</code></a> and <a href="https://www.tensorflow.org/api_docs/python/math_ops/matrix_math_functions#matmul" rel="nofollow noreferrer"><code>tf.matmul()</code></a>). These operations have efficient parallel implementations for multi-core CPUs and GPUs, and TensorFlow uses these implementations wherever available.</p></li> <li><p><strong>Parallelism between operations</strong>. TensorFlow uses a dataflow graph representation for your model, and where there are two nodes that aren't connected by a directed path in the dataflow graph, these may execute in parallel. For example, the Inception image recognition model has many parallel branches in its dataflow graph (see figure 3 in <a href="https://arxiv.org/pdf/1409.4842.pdf" rel="nofollow noreferrer">this paper</a>), and TensorFlow can exploit this to run many operations at the same time. The <a href="https://arxiv.org/pdf/1404.5997v2.pdf" rel="nofollow noreferrer">AlexNet paper</a> also describes how to use "model parallelism" to run operations in parallel on different parts of the model, and TensorFlow supports that using the same mechanism.</p></li> <li><p><strong>Parallelism between model replicas</strong>. TensorFlow is also designed for <a href="https://www.tensorflow.org/how_tos/distributed/" rel="nofollow noreferrer">distributed execution</a>. One common scheme for parallel training ("data parallelism") involves sharding your dataset across a set of identical workers, performing the same training computation on each of those workers for different data, and sharing the model parameters between the workers.</p></li> </ol> <p>In addition, libraries like TensorFlow and Theano can perform various optimizations when they can work with the whole dataflow graph of your model. For example, they can eliminate common subexpressions, avoid recomputing constant values, and generate more efficient fused code.</p>
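<p>A minimal sketch of the second kind of parallelism, in TensorFlow 2 syntax (which postdates this answer): two matrix multiplications with no data dependency, which the runtime is free to schedule concurrently; the shapes are arbitrary:</p> <pre><code>
import tensorflow as tf

@tf.function
def two_branches(x):
    # No directed path connects these two matmuls in the dataflow graph,
    # so TensorFlow may execute them in parallel (op-level parallelism).
    a = tf.matmul(x, x)
    b = tf.matmul(tf.transpose(x), tf.transpose(x))
    return a + b        # join point: needs both branches

y = two_branches(tf.random.normal([1024, 1024]))
</code></pre>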
2017-01-05 18:10:05.290000+00:00
2017-01-05 18:10:05.290000+00:00
null
null
41,489,238
<p>I recently took a course by Andrew Ng on Coursera. After that I shifted to Python and used Pandas, Numpy, and Sklearn to implement ML algorithms. Then, while surfing, I came across TensorFlow, found it pretty amazing, and implemented this <a href="https://www.tensorflow.org/tutorials/mnist/beginners/" rel="nofollow noreferrer">example</a> which takes MNIST data as input. But I am unsure why one would use such a library (TensorFlow). We are not doing any parallel calculations, since the weights updated in the previous epoch are used in the next one, are we? I am finding it difficult to find a reason to use such a library.</p>
2017-01-05 15:52:34.877000+00:00
2017-01-05 19:33:13.660000+00:00
null
machine-learning|tensorflow|conv-neural-network
['https://www.tensorflow.org/api_docs/python/nn/convolution#conv2d', 'https://www.tensorflow.org/api_docs/python/math_ops/matrix_math_functions#matmul', 'https://arxiv.org/pdf/1409.4842.pdf', 'https://arxiv.org/pdf/1404.5997v2.pdf', 'https://www.tensorflow.org/how_tos/distributed/']
5
59,794,121
<p>As others have stated, precision/recall is not directly usable as a loss function. However, better proxy loss functions have been found that help with a whole family of precision/recall related functions (e.g. ROC AUC, precision at fixed recall, etc.)</p> <p>The research paper <a href="https://arxiv.org/pdf/1608.04802.pdf" rel="nofollow noreferrer">Scalable Learning of Non-Decomposable Objectives</a> covers this with a method to sidestep the combinatorial optimization by the use of certain calculated bounds, and some Tensorflow code by the authors is available at the <a href="https://git.dst.etit.tu-chemnitz.de/external/tf-models/-/tree/master/research/global_objectives" rel="nofollow noreferrer">tensorflow/models</a> repository. Additionally, there is a followup question <a href="https://stackoverflow.com/questions/54286334/use-tensorflow-loss-global-objectives-recall-at-precision-loss-with-keras-not">on StackOverflow</a> that has an answer that adapts this into a usable Keras loss function.</p> <p>Special thanks to Francois Chollet and other participants on the <a href="https://github.com/keras-team/keras/issues/1732" rel="nofollow noreferrer">Keras issue thread here</a> that turned up that research paper. You may also find that thread provides other useful insights into the problem at hand.</p>
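<p>As a lighter-weight alternative to the bound-based method in that paper, a common trick is a differentiable "soft" F1 (or soft precision/recall) surrogate. Below is a rough Keras-compatible sketch of that generic idea, not the Global Objectives implementation linked above:</p> <pre><code>
import tensorflow as tf

def soft_f1_loss(y_true, y_pred):
    # y_pred: sigmoid probabilities in [0, 1]; y_true: binary ground truth
    y_true = tf.cast(y_true, tf.float32)
    tp = tf.reduce_sum(y_pred * y_true)
    fp = tf.reduce_sum(y_pred * (1.0 - y_true))
    fn = tf.reduce_sum((1.0 - y_pred) * y_true)
    soft_f1 = 2.0 * tp / (2.0 * tp + fp + fn + 1e-7)
    return 1.0 - soft_f1          # minimizing this maximizes the soft F1

# model.compile(optimizer="adam", loss=soft_f1_loss)   # `model` is your own Keras model
</code></pre>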
2020-01-17 19:59:22.540000+00:00
2022-01-15 23:15:49.303000+00:00
2022-01-15 23:15:49.303000+00:00
null
52,041,931
<p>I am developing a segmentation neural network with only two classes, 0 and 1 (0 is the background and 1 is the object that I want to find in the image). In each image, about 80% of the pixels are 1 and 20% are 0. As you can see, the dataset is unbalanced, and it skews the results. My accuracy is 85% and my loss is low, but that is only because my model is good at finding the background!</p> <p>I would like to base the optimizer on another metric, like precision or recall, which is more useful in this case.</p> <p>Does anyone know how to implement this?</p>
2018-08-27 14:53:24.780000+00:00
2022-01-15 23:15:49.303000+00:00
2018-08-27 15:01:03.113000+00:00
machine-learning|keras|metrics
['https://arxiv.org/pdf/1608.04802.pdf', 'https://git.dst.etit.tu-chemnitz.de/external/tf-models/-/tree/master/research/global_objectives', 'https://stackoverflow.com/questions/54286334/use-tensorflow-loss-global-objectives-recall-at-precision-loss-with-keras-not', 'https://github.com/keras-team/keras/issues/1732']
4
57,996,112
<p>The paper talks about pre-activation Resnet-101. Pre-activation architecture is where they use BN- Relu- Conv. It has been shown to improve performance of Resnets in the paper Identity Mappings in Deep Residual Networks-<a href="https://arxiv.org/pdf/1603.05027.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1603.05027.pdf</a></p>
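<p>For illustration, a short Keras sketch contrasting the pre-activation ordering (BN, then ReLU, then Conv) with the classic post-activation ordering; the filter count and kernel size are arbitrary placeholders:</p> <pre><code>
from tensorflow.keras import layers

def preact_conv(x, filters, kernel_size=3):
    # pre-activation ordering: BN, then ReLU, then Conv
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    return layers.Conv2D(filters, kernel_size, padding="same")(x)

def postact_conv(x, filters, kernel_size=3):
    # classic ordering: Conv, then BN, then ReLU
    x = layers.Conv2D(filters, kernel_size, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)
</code></pre>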
2019-09-18 15:19:45.857000+00:00
2019-09-18 15:19:45.857000+00:00
null
null
57,949,921
<p>In the DenseNet paper, it says:</p> <blockquote> <p>each “conv” layer shown in the table corresponds the sequence BN-ReLU-Conv.</p> </blockquote> <p>Why is it that the relu activation comes before the convolution? Thanks in advance!</p>
2019-09-16 03:06:38.697000+00:00
2019-09-19 02:03:07.770000+00:00
2019-09-19 02:03:07.770000+00:00
computer-vision
['https://arxiv.org/pdf/1603.05027.pdf']
1
65,138,882
<p>I'm not sure if you're talking about the design of models with the ability to predict multiple labels, or just a problem of implementation.</p> <p>You can simply sort the scores of the model's output and take the top-N highest as the predictions. But if you're talking about how to design such a model, there's a lot of work on it. Check out <a href="https://arxiv.org/abs/2009.14119" rel="nofollow noreferrer">this paper</a>, for example.</p>
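<p>A minimal Keras sketch of the simplest setup for predicting several values at once (a final Dense layer with N units); all sizes here are made up, so swap in your own input shape and target count:</p> <pre><code>
import tensorflow as tf
from tensorflow.keras import layers

n_features, n_targets = 16, 5            # hypothetical sizes

model = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(n_features,)),
    layers.Dense(n_targets)              # N linear outputs = N predicted values
])
model.compile(optimizer="adam", loss="mse")

# for multi-label classification instead, use sigmoid outputs and take the
# top-N scores, e.g. np.argsort(scores)[-N:] after model.predict(...)
</code></pre>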
2020-12-04 06:25:33.403000+00:00
2020-12-04 06:25:33.403000+00:00
null
null
65,138,738
<p>I'm on the newer side of learning about machine learning (keras/tensorflow) and am curious about how one might set up a network that takes in some input (an &quot;image&quot; with x channels/features) and predicts more than one value based on this input. I've seen regression models, but the ones I've seen only predict a single value, and that is not really how I want to set the problem up. I set up a test NN with a final Dense layer sized to the number of predictions I would like to make, but all of its outputs seem to converge to one value that is not necessarily what the output should be (I think the NN optimizes itself toward some value and converges to it). Any help would be greatly appreciated. I'm also new to posting here, so if I can do something to post a better-posed question please let me know! The closest thing that I've seen to predicting some sort of tensor is an RNN, but I am not sure whether that achieves this.</p>
2020-12-04 06:08:35.243000+00:00
2020-12-04 06:25:33.403000+00:00
null
python|tensorflow|machine-learning|keras
['https://arxiv.org/abs/2009.14119']
1
30,439,723
<p>In short, you want to attach to a single host a number of devices all of which have the same IP. There are two issues you need to deal with — the ARP cache and routing.</p> <h1>The ARP cache</h1> <p>The ARP cache maps neighbours' IPs to MAC addresses. Since all of your devices have the same IP, the ARP cache will get confused in your situation, and cause all of the traffic to be sent to the same neighbour.</p> <p>I believe that under Linux the ARP cache is indexed by (IP, interface) pairs. This implies that the ARP cache will not get confused if each device is connected to a different interface (please let us know if that works). On the other hand, if you connect all of your devices to the same switch, the ARP cache will get in the way (unless you play tricks with VLANs).</p> <h1>Routing</h1> <p>In traditional next-hop routing, the routing table is indexed by destination IPs. Since all of your devices have the same IP, traditional next-hop is unable to distinguish them. In <em>source-specific routing</em>, the routing table is indexed by <em>(dest, src)</em> pairs. In other words, a source-specific router can choose the next hop by using both the source and the destination.</p> <p>In order to use source-specific routing, you will need to set up a distinct IP on your host for each of the devices. Your application will then be able to pick the right device by performing <code>bind</code> on the right address.</p> <p>Setting up a source-specific routing table is described in Section 4.1 of the <a href="http://lartc.org/howto/" rel="nofollow noreferrer">LARTC</a>. For more information about source-specific routing, please see <a href="https://datatracker.ietf.org/doc/html/draft-troan-homenet-sadr" rel="nofollow noreferrer">draft-troan-homenet-sadr</a> or <a href="http://arxiv.org/pdf/1403.0445.pdf" rel="nofollow noreferrer">this paper about source-specifig routing</a> (disclaimer — I'm a co-author).</p>
2015-05-25 13:38:04.287000+00:00
2015-05-25 13:38:04.287000+00:00
2021-10-07 07:27:41.270000+00:00
null
30,387,753
<p>I've inherited a swath of code that talks to a device developed in-house. Said device has a network interface that is, generously, rather ad-hoc:</p> <ul> <li>it always sets its IP address to be 172.16.0.50, and assumes it's connected directly to 172.16.0.250 (via a physical cable)</li> <li>it sends a UDP heartbeat to .250:2000, regardless of whether .250 has bound to that port</li> <li>it can send UDP traffic to .250:9001 through .250:9016</li> <li>it exposes a text-based admin interface over TCP at .50:7734</li> <li>it binds as UDP to .50:7734 and accepts any incoming traffic on that port as a timestamp to synchronize itself against</li> </ul> <p>Modifying the device's code is absolutely out of the question, sadly. Source is available, unboxed hardware is available to test against, but deployed boxes are aggressively sealed against the environment, and gaining access to the flash chip it boots from is a day-long process.</p> <p>I'm interested in attaching several of these devices to the same host computer, but my background is in applications, web, and some embedded - not networking. Each device has a dedicated network interface (eg p1p1, p1p2, etc), which I think should save me, but I'm not sure how to set Fedora up to do the necessary impersonation, and I'm not sure how to set up my application code to distinguish between UDP traffic on interface p1p1 - IP 172.16.0.50 - port 9000, from UDP traffic from interface p1p2 - IP 172.16.0.50 - port 9000, or to specify that I want to broadcast a given datagram via UDP at 172.16.0.50:9000 on interface p1p1 vs 172.16.0.50:9000 on interface p1p2.</p> <p>I believe I can pull this off with a sufficiently clever combination of static routing entries and iptables rules for bidirectional port forwarding, but I'd like to ask before spending days on a fundamentally flawed approach. What's the sanest way to make this palatable?</p>
2015-05-22 03:10:13.723000+00:00
2015-05-25 13:38:04.287000+00:00
2015-05-22 13:51:48.140000+00:00
linux|networking|tcp|udp|system-administration
['http://lartc.org/howto/', 'https://datatracker.ietf.org/doc/html/draft-troan-homenet-sadr', 'http://arxiv.org/pdf/1403.0445.pdf']
3
39,583,756
<blockquote> <p>does anybody have thoughts on building NER models for labeling text sequences like addresses or temporal expressions?</p> </blockquote> <p>Yes: <a href="https://arxiv.org/abs/1606.03475" rel="nofollow noreferrer">https://arxiv.org/abs/1606.03475</a> uses an RNN for NER.</p> <p>Figure 1 gives an overview of the ANN architecture:</p> <p><a href="https://i.stack.imgur.com/o2PFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o2PFU.png" alt="enter image description here"></a></p>
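<p>A generic Keras sketch of an RNN sequence tagger of the kind such papers describe (a bidirectional LSTM emitting one label per token). The vocabulary size, tag set and sequence length are placeholders, and this is not the exact architecture from the linked paper:</p> <pre><code>
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, n_tags, max_len = 20000, 5, 50     # placeholder sizes

model = tf.keras.Sequential([
    layers.Embedding(vocab_size, 100, input_length=max_len),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(n_tags, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# inputs: (batch, max_len) token ids; targets: (batch, max_len) tag ids
</code></pre>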
2016-09-19 23:59:48.517000+00:00
2016-09-19 23:59:48.517000+00:00
null
null
39,580,743
<p>Folks, does anybody have thoughts on building NER models for labeling text sequences like addresses or temporal expressions?</p> <p>There is a parser for temporal expressions like "last five days" called SUTime: <a href="http://nlp.stanford.edu/software/sutime.shtml" rel="nofollow">http://nlp.stanford.edu/software/sutime.shtml</a>. Unfortunately, it's buggy and built as a massive mash of rules.</p> <p>Parsing addresses is even more difficult and error-prone. The CoreNLP parser fails to parse even simple things like Mountain View, CA.</p> <p>I feel that there should be a way to train an RNN to recognize these patterns without maintaining a giant list of rules or a giant lookup table.</p>
2016-09-19 19:33:45.490000+00:00
2016-09-19 23:59:48.517000+00:00
null
nlp|named-entity-recognition|recurrent-neural-network
['https://arxiv.org/abs/1606.03475', 'https://i.stack.imgur.com/o2PFU.png']
2
27,180,312
<p>What you are proposing to do is a massively bad idea, so much so that I'm reluctant to show you how to do it. The reason is that for OLS, assuming the residuals are normally distributed with constant variance, the parameter estimates follow a multivariate t-distribution and we can calculate confidence limits and p-values in the usual way. </p> <p>However, if we perform NNLS on the same data, the residuals <em>will not be normally distributed</em>, and the standard techniques for calculating p-values, etc. <em>will produce garbage</em>. There are methods for estimating confidence limits on the parameters of an NNLS fit (see <a href="http://arxiv.org/pdf/1205.0953v2.pdf">this reference</a> for instance), but they are approximate and usually rely on fairly restrictive assumptions about the dataset.</p> <p>On the other hand, it would be nice if some of the more basic functions for an <code>lm</code> object, such as <code>predict(...)</code>, <code>coef(...)</code>, <code>residuals(...)</code>, etc. also worked for the result of an NNLS fit. So one way to achieve that is to use <code>nls(...)</code>: just because a model is linear in the parameters does not mean you cannot use non-linear least squares to find the parameters. <code>nls(...)</code> offers the option to set lower (and upper) limits on the parameters if you use the <code>port</code> algorithm.</p> <pre><code>set.seed(1) # for reproducible example data &lt;- as.data.frame(matrix(runif(1e4, min = -1, max = 1),nc=4)) colnames(data) &lt;-c("y", "x1", "x2", "x3") data$y &lt;- with(data,-10*x1+x2 + rnorm(2500)) A &lt;- as.matrix(data[,c("x1", "x2", "x3")]) b &lt;- data$y test &lt;- nnls(A,b) test # Nonnegative least squares model # x estimates: 0 1.142601 0 # residual sum-of-squares: 88391 # reason terminated: The solution has been computed sucessfully. fit &lt;- nls(y~b.1*x1+b.2*x2+b.3*x3,data,algorithm="port",lower=c(0,0,0)) fit # Nonlinear regression model # model: y ~ b.1 * x1 + b.2 * x2 + b.3 * x3 # data: data # b.1 b.2 b.3 # 0.000 1.143 0.000 # residual sum-of-squares: 88391 </code></pre> <p>As you can see, the result of using <code>nnls(...)</code> and the result of using <code>nls(...)</code> with <code>lower=c(0,0,0)</code> are identical. But <code>nls(...)</code> produces an <code>nls</code> object, which supports (most of) the same methods as an <code>lm</code> object. So you can write <code>predict(fit)</code>, <code>coef(fit)</code>, <code>residuals(fit)</code>, <code>AIC(fit)</code> etc. You can also write <code>summary(fit)</code> and <code>confint(fit)</code> <em>but beware</em>: the values you get are not meaningful!!!</p> <p>To illustrate the point about the residuals, we compare the residuals for an OLS fit to this data with the residuals for the NNLS fit.</p> <pre><code>par(mfrow=c(1,2),mar=c(3,4,1,1)) qqnorm(residuals(lm(y~.,data)),main="OLS"); qqline(residuals(lm(y~.,data))) qqnorm(residuals(fit),main="NNLS"); qqline(residuals(fit)) </code></pre> <p><img src="https://i.stack.imgur.com/vwQUN.png" alt=""></p> <p>In this dataset, the stochastic part of the variability in <code>y</code> is N(0,1) by design, so the residuals from the OLS fit (Q-Q plot on the left) are normal. But the residuals from the same dataset fitted using NNLS are not remotely normal. This is because the true dependence of <code>y</code> on <code>x1</code> is <code>-10</code>, but the NNLS fit is forcing it to 0. Consequently, the proportion of very large residuals (both positive and negative) is much higher than would be expected from the normal distribution.</p>
2014-11-28 00:18:53.490000+00:00
2014-11-28 04:57:17.993000+00:00
2014-11-28 04:57:17.993000+00:00
null
27,178,607
<p>I was looking for a way to do a linear regression under positivity constraints, and therefore came across the nnls approach. However, I was wondering how I could get the same statistics from nnls as the ones provided by an lm object, more specifically the R-squared, the Akaike information criterion, the p-values and confidence intervals.</p> <pre><code>library(arm) library(nnls) data = runif(100*4, min = -1, max = 1) data = matrix(data, ncol = 4) colnames(data) = c("y", "x1", "x2", "x3") data = as.data.frame(data) data$x1 = -data$y A = as.matrix(data[,c("x1", "x2", "x3")]) b = data$y test = nnls(A,b) print(test) </code></pre> <p>Is there a way to re-estimate this in an lm framework? Using an offset and fixing the coefficient did not work... Is there a way to obtain these statistics? Or another way to create an lm object with positivity constraints on the coefficients?</p> <p>Thanks, Romain.</p>
2014-11-27 21:02:20.377000+00:00
2019-08-08 08:22:13.540000+00:00
2019-08-08 08:22:13.540000+00:00
r|constraints|linear-regression
['http://arxiv.org/pdf/1205.0953v2.pdf']
1
51,419,004
<p>Taken from <a href="https://arxiv.org/pdf/1711.07758.pdf" rel="nofollow noreferrer">UNDERSTANDING DEEP LEARNING GENERALIZATION BY MAXIMUM ENTROPY (Zheng et al., 2017)</a>:</p> <p>(Original Maximum Entropy Model) Suppose the dataset has input X and label Y; the task is to find a good prediction of Y using X. The prediction Ŷ needs to maximize the conditional entropy H(Ŷ | X) while preserving the same distribution as the data (X, Y). This is formulated as:</p> <p>min −H(Ŷ | X)    (1)</p> <p>s.t. P(X, Y) = P(X, Ŷ),  Σ<sub>Ŷ</sub> P(Ŷ | X) = 1</p> <p>Berger et al. (1996) solve this with Lagrange multipliers ω<sub>i</sub>, giving the exponential form:</p> <p>P<sub>ω</sub>(Ŷ = y | X = x) = (1 / Z<sub>ω</sub>(x)) · exp( Σ<sub>i</sub> ω<sub>i</sub> f<sub>i</sub>(x, y) )</p>
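<p>In practice, maximum entropy classification with indicator features is equivalent to multinomial logistic regression, so a quick way to try it is scikit-learn; the toy data below is made up:</p> <pre><code>
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts  = ["win money now", "meeting at noon", "cheap pills fast", "lunch tomorrow?"]
labels = ["spam", "ham", "spam", "ham"]                  # toy data

vec = CountVectorizer()
X = vec.fit_transform(texts)                             # bag-of-words features
clf = LogisticRegression(max_iter=1000).fit(X, labels)   # a MaxEnt classifier
print(clf.predict(vec.transform(["free money"])))
</code></pre>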
2018-07-19 09:27:57.290000+00:00
2021-03-20 17:54:06.280000+00:00
2021-03-20 17:54:06.280000+00:00
null
37,228,196
<p>Can someone give me a clear and simple definition of Maximum entropy classification? It would be very helpful if someone can provide a clear analogy, as I am struggling to understand.</p>
2016-05-14 15:00:20.297000+00:00
2021-03-20 17:54:06.280000+00:00
2016-05-14 16:41:05.320000+00:00
machine-learning|classification|entropy
['https://arxiv.org/pdf/1711.07758.pdf']
1
66,393,446
<p>It's <em>an</em> approach, but whether it's the best one depends on the problem.</p> <p>Exactly how you do that text preprocessing will matter a lot more than what clustering algorithm you use. The mapping from the text to a vector space determines what it <em>means</em> for two emails to be similar. The clustering algorithm just groups the ones that are most similar. (As an aside, I would think that the email <em>text</em> would be a more useful field to cluster on than the domain.) There are lots of options for mapping arbitrary text onto a single vector. A couple papers to get you started: <a href="https://jmlr.org/papers/volume3/blei03a/blei03a.pdf" rel="nofollow noreferrer">Latent Dirichlet Allocation</a> (the θ vector will be the one you want), <a href="https://arxiv.org/abs/1405.4053" rel="nofollow noreferrer">Paragraph Vectors</a>.</p> <p>K-Means is a reasonable choice if you know how many clusters you want. When deciding what properties you want your clustering algorithm to have, the <a href="https://scikit-learn.org/stable/modules/clustering.html" rel="nofollow noreferrer">scikit-learn page on clustering</a> is a useful resource. It shows datasets with a variety of shapes, along with the clusters that are extracted from each by the various algorithms.</p>
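<p>A bare-bones scikit-learn sketch of that pipeline (TF-IDF text vectors, then k-means); the example emails and the cluster count are placeholders:</p> <pre><code>
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

emails = [
    "alice.smith@uni.edu alice smith",      # placeholder rows: concatenate whatever
    "bob.jones@corp.com bob jones",         # text fields you actually want to compare
    "a.smith@uni.edu a smith",
]
X = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(emails)
labels = KMeans(n_clusters=2, random_state=0).fit_predict(X)
print(labels)
</code></pre>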
2021-02-26 22:16:17.490000+00:00
2021-02-26 22:16:17.490000+00:00
null
null
66,386,328
<p>I have email data (first_name, last_name, email, username, email_domain), and I want to cluster the emails based on their text, so that similar emails are clustered together and similar names are clustered together. What I am thinking of doing is to apply text preprocessing on email_domain and train a KMeans algorithm. Am I on the right track? Thank you.</p>
2021-02-26 13:04:20.713000+00:00
2021-02-26 22:16:17.490000+00:00
null
python|machine-learning|cluster-analysis
['https://jmlr.org/papers/volume3/blei03a/blei03a.pdf', 'https://arxiv.org/abs/1405.4053', 'https://scikit-learn.org/stable/modules/clustering.html']
3
61,245,688
<p>Thank you for your question, Johannes.</p> <p>Array and object lookup with dynamic function calls was introduced in very early versions of JSONiq, which started as an extension of XQuery. It is common practice in language design to try to reuse existing machinery in early investigations, before extending the data model and syntax.</p> <p>Since objects and arrays can be seen as "extensional functions" that explicitly list the input-output pairs, (ab)using dynamic function calls for object and array lookup is quite natural. This approach was also taken in XQuery 3.1.</p> <p>Syntactic extensions came later. In the "pure JSONiq" syntax, we opted for <code>.Expr</code> for objects and <code>[]</code> as well as <code>[[Expr]]</code> for arrays (double <code>[[]]</code> to not confuse with predicates). XQuery 3.1 also adds a convenient syntax with <code>?</code> for both objects and arrays.</p> <p>For FLWOR expressions I recommend looking into <a href="http://rumbledb.org" rel="nofollow noreferrer">Rumble</a>, which pretty much does pipelines in that way already. The paper is <a href="https://arxiv.org/abs/1910.11582" rel="nofollow noreferrer">here</a>.</p>
2020-04-16 08:20:02.583000+00:00
2020-04-16 08:20:02.583000+00:00
null
null
61,096,536
<p>I'm currently working on improving JSON querying capabilities with Brackit[1] and [2], which is an XQuery engine with additional arrays and "records". I'm now basically following the same XDM as JSONiq uses, but I'm sadly no XQuery expert. I guess I've more or less taken over the project from Sebastian and especially added temporal enhancements.</p> <p>Brackit uses a dereferencing operator <code>=&gt;</code> for records / objects to get the value for a name.</p> <p>Additionally, it uses <code>[[expr()]]</code> for array index lookups, I guess just like the pure JSONiq specification.</p> <p>I'm sure you have good reasons to do the dynamic function calls instead, so I might have to change it. However, I think that the dereferencing operator might work in all cases, which is in my opinion a nicer syntax.</p> <p>I think the vision of a query compiler for semi-structured data with proven optimizations for use in data stores is great: <a href="http://wwwlgis.informatik.uni-kl.de/cms/dbis/projects/brackit/mission/" rel="nofollow noreferrer">http://wwwlgis.informatik.uni-kl.de/cms/dbis/projects/brackit/mission/</a></p> <p>One of the decisive features of Brackit might be the pipelining of FLWOR expressions for set-oriented processing.</p> <p>Kind regards</p> <p>Johannes</p> <p>[1] <a href="https://github.com/sirixdb/brackit" rel="nofollow noreferrer">https://github.com/sirixdb/brackit</a></p> <p>[2] <a href="http://wwwlgis.informatik.uni-kl.de/cms/fileadmin/publications/2013/Dissertation-Baechle.pdf" rel="nofollow noreferrer">http://wwwlgis.informatik.uni-kl.de/cms/fileadmin/publications/2013/Dissertation-Baechle.pdf</a></p>
2020-04-08 08:33:41.417000+00:00
2020-04-16 08:20:02.583000+00:00
null
json|jsoniq
['http://rumbledb.org', 'https://arxiv.org/abs/1910.11582']
2
40,371,126
<pre><code>#All PDFs | Rename { query Arxiv for the abstract by filename, use the page title + ".pdf"} Get-ChildItem *.pdf | Rename-Item -NewName { $title = (Invoke-WebRequest "https://arxiv.org/abs/$($_.BaseName)").parsedhtml.title $title = $title -replace '[\\/:\*\?"&lt;&gt;\|]', '-' # replace forbidden characters "$title.pdf" # in filenames with - } </code></pre> <p>You might want to put a <code>-whatif</code> on the end first, to see what it would do, in case it ruins all the filenames. Or take a backup copy of the folder.</p> <p>Edit: One of the titles is "Signatures of bifurcation on quantum correlations: Case of quantum kicked top" and the <code>:</code> is not allowed in a filename. Script edited to replace all forbidden characters in Windows filenames with dashes instead.</p>
2016-11-02 01:26:43.227000+00:00
2016-11-02 02:42:50.183000+00:00
2016-11-02 02:42:50.183000+00:00
null
40,370,636
<p>I download some PDF files from <a href="https://arxiv.org/list/quant-ph/1610?skip=0&amp;show=25" rel="nofollow noreferrer">HERE</a><br></p> <p>The PDF files are downloaded not with their original filenames but with number strings like</p> <pre><code>1610.00005 1610.00022 </code></pre> <p>Fortunately, in this HTTP link page or in txt files (if I copy them for offline renaming) I have the relative <br><br> <code>numeric -&gt; original text filename</code> <br><br> string correspondence. <br> For example, when I download these files</p> <pre><code>- A Note on Time Operators in Relativistic Quantum Mechanics - A Stronger Theorem Against Macro-realism - Determining quantum correlations in bipartite systems - from qubit to qutrit and beyond - Pair entanglement in dimerized spin-s chains </code></pre> <p>the files are downloaded with these filenames</p> <pre><code>1610.00005.pdf 1610.00022.pdf 1610.00041.pdf 1610.00056.pdf </code></pre> <p>BUT I want to rename them to the original filenames, not number strings. I'd like to use the HTTP link or a text file as the source for the mapping.</p> <p>I have only this code <br><br>(<em>powershell</em>)</p> <pre><code>$names = Get-Content c\myfiles Get-ChildItem C:\somedir\*.pdf | Sort -desc | Foreach {$i=0} {Rename-Item $_ ($_.basename + $names[$i++] + $_.extension) -WhatIf} </code></pre> <p>or <em>batch code</em></p> <pre><code>@echo off setlocal EnableDelayedExpansion rem Load the list of authors: set i=0 for /F %%a in (myfiles.txt) do ( set /A i+=1 set "author[!i!]=%%a" ) rem Do the rename: set i=0 for /F %%a in ('dir /b *.pdf') do ( set /A i+=1 for %%i in (!i!) do ren "%%a" "%%~Na!author[%%i]!%%~Xa" ) </code></pre>
2016-11-02 00:26:25.577000+00:00
2016-11-02 02:42:50.183000+00:00
2016-11-02 01:52:00.503000+00:00
powershell|batch-file|batch-rename
[]
0
42,295,667
<p>You could also follow a shallow (authors call it deep though) inverse regression using Gensim and word embeddings for document classification. Ideally, using both the titles and text of the forum posts, you should be able to build a pretty decent classification system. Follow along here in this <a href="https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/deepir.ipynb" rel="nofollow noreferrer">notebook</a> and <a href="https://arxiv.org/pdf/1504.07295v3.pdf" rel="nofollow noreferrer">paper.</a></p>
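<p>A short gensim sketch of the document-embedding half of that approach (Doc2Vec vectors keyed by tag). The <code>posts</code> list is a placeholder, the syntax is gensim 4.x, and this is not the exact setup from the linked notebook:</p> <pre><code>
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

posts = [("how to train word2vec on my corpus", "gensim"),
         ("segfault when freeing a pointer twice", "c")]        # placeholder data

corpus = [TaggedDocument(words=text.lower().split(), tags=[tag])
          for text, tag in posts]
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

new_vec = model.infer_vector("word2vec training question".split())
print(model.dv.most_similar([new_vec], topn=1))   # nearest tag vector
</code></pre>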
2017-02-17 10:35:15.463000+00:00
2017-02-17 10:35:15.463000+00:00
null
null
31,054,846
<p>I've got a database of hundreds of thousands of forum posts, and would like to tag them in an unsupervised way.</p> <p>I noticed that StackOverflow's tag system suggests tags as I go. How does this algorithm work?</p> <p>I also found this that implies it is SVM based- is it official? <a href="http://dl.acm.org/citation.cfm?id=2660970&amp;dl=ACM&amp;coll=DL&amp;CFID=522960920&amp;CFTOKEN=15091676" rel="noreferrer">http://dl.acm.org/citation.cfm?id=2660970&amp;dl=ACM&amp;coll=DL&amp;CFID=522960920&amp;CFTOKEN=15091676</a></p>
2015-06-25 15:38:35.097000+00:00
2017-02-17 10:35:15.463000+00:00
null
machine-learning|svm|tagging
['https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/deepir.ipynb', 'https://arxiv.org/pdf/1504.07295v3.pdf']
2
27,729,501
<p>How to gain access to a requester pays bucket on Amazon AWS</p> <p>Create an Amazon AWS account and make sure a working credit card is in the account billing settings.</p> <p>Download s3browser <a href="http://s3browser.com/" rel="nofollow">http://s3browser.com/</a></p> <p>CTRL-SHIFT-A to add your Amazon account credentials</p> <p>Create a new bucket to put files in. Upload a file to test that the folder is functional. Type CTRL-E to add an External Bucket (under "Buckets" in the top menu). </p> <p>Enter arxiv/pdf in the "bucket name" field, then click "Add External Bucket"</p> <p>This will not work on the first connection attempt. After about 30 seconds the program dialog will ask:<br> Would you like to try this bucket as a requester pays bucket? Select Yes The connection will retry, and after a brief refresh, you should see the arxiv/pdf folder populate...</p> <p>Hope this helps others navigate the ridiculously frustrating requester pays bucket gauntlet. Have a happy new year and thanks again for your help guys...</p>
2015-01-01 09:02:19.060000+00:00
2015-01-01 09:02:19.060000+00:00
null
null
27,725,238
<p>I have tried a few programs to try to locate and download the arxiv.org requester pays bucket, but no success.<br> I emailed arxiv.org for help, but no response. How the HECK to do this on Windows 7x64? Does anyone have or know of a step-by-step tutorial on how to access third party requester pays buckets? I am willing to make one and post on here and on Youtube if I can just get started...</p> <p>The details I am trying to follow are on: </p> <p><a href="http://arxiv.org/help/bulk_data_s3" rel="nofollow">http://arxiv.org/help/bulk_data_s3</a></p> <p>I tried various programs (s3browser, bucketexplorer, cloudberry) with no success. It appears the requester pays buckets require more specific details to locate that the arxiv.org website is not providing.</p> <p>Now I have installed amazon cli</p> <p>My credentials are entered and I have confirmed that I can access my account buckets, created a couple of folders, and have sent and received a few files.</p> <p>When I try to locate the arxiv.org requester pays bucket: </p> <pre><code>aws s3 ls s3://arxiv.s3.amazonaws.com/pdf/ </code></pre> <p>I get the response:</p> <p>A client error (NoSuchBucket) occurred when calling the ListObjects operation: The specified bucket does not exist</p> <p>I also tried: </p> <pre><code>aws s3 ls --add-header="x-amz-request-payer:requester" ls s3:/ /arxiv/pdf/arXiv_pdf_manifest.xml </code></pre> <p>I get the responseL</p> <pre><code>Unknown options: --add-header=x-amz-request-payer:requester,s3://arxiv/pdf/arXiv _pdf_manifest.xml </code></pre> <p>I tried: </p> <pre><code>aws get "x-amz-request-payer:requester" arxiv/pdf/arXiv_pdf_1001_001.tar &gt; arXiv_pdf_1001_001.tar usage: aws [options] &lt;command&gt; &lt;subcommand&gt; [parameters] aws: error: argument command: Invalid choice, valid choices are: autoscaling | cloudformation etc etc </code></pre> <p>Also tried:</p> <pre><code>C:\z_amazonAWScli&gt;aws s3 "x-amz-request-payer:requester" ls s3://arxiv/pdf/ usage: aws [options] &lt;command&gt; &lt;subcommand&gt; [parameters] aws: error: argument subcommand: Invalid choice, valid choices are: ls | website etc etc </code></pre> <p>Several other variations, no response. Thanks in advance, miniscule</p> <p>Running aws s3 ls shows me my account folders. Running aws s3 ls s3://arxiv/pdf/arXiv_pdf_manifest.xml gives me A client error (AccessDenied) occurred when calling the ListObjects operation: Access Denied</p>
2014-12-31 18:57:29.447000+00:00
2015-01-01 09:02:19.060000+00:00
2015-01-01 07:07:53.537000+00:00
amazon-web-services
['http://s3browser.com/']
1
27,727,467
<p>Let us work through these errors one by one.</p> <ul> <li><p>The first error, for the command <code>aws s3 ls s3://arxiv.s3.amazonaws.com/pdf/</code>, means the permissions are not set properly. Please log in to the management console to check the permissions. If you can, please run the command <code>aws s3 ls</code>; do you get any buckets listed?</p></li> <li><p>For the second command <code>aws s3 ls --add-header="x-amz-request-payer:requester" ls s3:/ /arxiv/pdf/arXiv_pdf_manifest.xml</code>, the error gives the reason, <code>Unknown options</code>; if you check the source on GitHub at <a href="https://github.com/aws/aws-cli" rel="nofollow">https://github.com/aws/aws-cli</a>, there is no such option <code>--add-header</code>.</p></li> <li><p>For the third command, <code>aws get</code>, it should be followed by a subcommand, which you missed.</p></li> </ul> <p>Here is the help output for <code>aws get</code></p> <pre><code>$ aws get help usage: aws [options] &lt;command&gt; &lt;subcommand&gt; [parameters] aws: error: argument command: Invalid choice, valid choices are: autoscaling | cloudformation cloudfront | cloudsearch cloudsearchdomain | cloudtrail cloudwatch | cognito-identity cognito-sync | datapipeline directconnect | dynamodb ec2 | elasticache elasticbeanstalk | elastictranscoder elb | emr iam | importexport kinesis | kms lambda | logs opsworks | rds redshift | route53 route53domains | sdb ses | sns sqs | storagegateway sts | support swf | s3api s3 | configure deploy | configservice help </code></pre> <p>The last command fails with a similar error because it is malformed. Please DO review the awscli documentation first, especially the s3 part.</p> <p><a href="http://docs.aws.amazon.com/cli/latest/userguide/cli-s3.html" rel="nofollow">Using Amazon S3 with the AWS Command Line Interface</a></p>
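<p>If the CLI keeps fighting you, the requester-pays flag can also be passed explicitly from Python with boto3; the bucket and key below follow the arXiv bulk-data docs but should be treated as placeholders, and the transfer charges go to your own account:</p> <pre><code>
import boto3

s3 = boto3.client("s3")

# list objects in a requester-pays bucket
resp = s3.list_objects_v2(Bucket="arxiv", Prefix="pdf/", RequestPayer="requester")
for obj in resp.get("Contents", []):
    print(obj["Key"])

# download one object, again declaring that the requester pays
s3.download_file("arxiv", "pdf/arXiv_pdf_manifest.xml", "arXiv_pdf_manifest.xml",
                 ExtraArgs={"RequestPayer": "requester"})
</code></pre>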
2015-01-01 00:21:59.593000+00:00
2015-01-01 00:21:59.593000+00:00
null
null
27,725,238
<p>I have tried a few programs to try to locate and download the arxiv.org requester pays bucket, but no success.<br> I emailed arxiv.org for help, but no response. How the HECK to do this on Windows 7x64? Does anyone have or know of a step-by-step tutorial on how to access third party requester pays buckets? I am willing to make one and post on here and on Youtube if I can just get started...</p> <p>The details I am trying to follow are on: </p> <p><a href="http://arxiv.org/help/bulk_data_s3" rel="nofollow">http://arxiv.org/help/bulk_data_s3</a></p> <p>I tried various programs (s3browser, bucketexplorer, cloudberry) with no success. It appears the requester pays buckets require more specific details to locate that the arxiv.org website is not providing.</p> <p>Now I have installed amazon cli</p> <p>My credentials are entered and I have confirmed that I can access my account buckets, created a couple of folders, and have sent and received a few files.</p> <p>When I try to locate the arxiv.org requester pays bucket: </p> <pre><code>aws s3 ls s3://arxiv.s3.amazonaws.com/pdf/ </code></pre> <p>I get the response:</p> <p>A client error (NoSuchBucket) occurred when calling the ListObjects operation: The specified bucket does not exist</p> <p>I also tried: </p> <pre><code>aws s3 ls --add-header="x-amz-request-payer:requester" ls s3:/ /arxiv/pdf/arXiv_pdf_manifest.xml </code></pre> <p>I get the responseL</p> <pre><code>Unknown options: --add-header=x-amz-request-payer:requester,s3://arxiv/pdf/arXiv _pdf_manifest.xml </code></pre> <p>I tried: </p> <pre><code>aws get "x-amz-request-payer:requester" arxiv/pdf/arXiv_pdf_1001_001.tar &gt; arXiv_pdf_1001_001.tar usage: aws [options] &lt;command&gt; &lt;subcommand&gt; [parameters] aws: error: argument command: Invalid choice, valid choices are: autoscaling | cloudformation etc etc </code></pre> <p>Also tried:</p> <pre><code>C:\z_amazonAWScli&gt;aws s3 "x-amz-request-payer:requester" ls s3://arxiv/pdf/ usage: aws [options] &lt;command&gt; &lt;subcommand&gt; [parameters] aws: error: argument subcommand: Invalid choice, valid choices are: ls | website etc etc </code></pre> <p>Several other variations, no response. Thanks in advance, miniscule</p> <p>Running aws s3 ls shows me my account folders. Running aws s3 ls s3://arxiv/pdf/arXiv_pdf_manifest.xml gives me A client error (AccessDenied) occurred when calling the ListObjects operation: Access Denied</p>
2014-12-31 18:57:29.447000+00:00
2015-01-01 09:02:19.060000+00:00
2015-01-01 07:07:53.537000+00:00
amazon-web-services
['https://github.com/aws/aws-cli', 'http://docs.aws.amazon.com/cli/latest/userguide/cli-s3.html']
2
23,326,107
<p>Your question involves quantification, and is one example of things that cannot be expressed as one query in regular SPARQL 1.0. (It may be expressed in SPARQL 1.1 as shown in Jeen Broekstra's answer, or as an OWL class.)</p> <p>Many SPARQL 1.0 implementations, though, have developed extensions to handle quantification. A commercial example is <a href="http://www.intellidimension.com/developers/library/sparql-extensions.aspx#quantification" rel="nofollow">Intellidimension Semantics Platform,</a> which would give you something like:</p> <pre><code>SELECT ?parent WHERE { ?child :hasParent ?parent FORALL(?child){ ?child :hasSchool "MIT" } } </code></pre> <p>An academic example is <a href="http://www.cs.ox.ac.uk/publications/publication3409-abstract.html" rel="nofollow">SPARQLog</a> from Oxford University Computing Lab. I am not aware that this system is available as an easy download, but the paper is freely available and provides insight into the difficulties of implementing quantification for SPARQL.</p> <p>As for your question about the limits of SPARQL, it is too general to answer in a few words, but here is a link to a relevant paper, again as far as SPARQL 1.0 is concerned: <a href="http://arxiv.org/abs/cs/0605124" rel="nofollow"><em>Semantics and Complexity of SPARQL</em></a></p>
2014-04-27 16:44:25.160000+00:00
2014-04-28 08:02:02.420000+00:00
2014-04-28 08:02:02.420000+00:00
null
23,322,329
<p>I would like to know how to express the following question in SPARQL:</p> <p>"Give me the parents whose every child goes to MIT" </p> <p>More generally, I would like to know what the limits of SPARQL queries are. What kinds of questions, whose answers are in the database, <strong>cannot</strong> be formulated in SPARQL?</p> <p>Thank you for your help </p>
2014-04-27 10:39:46.163000+00:00
2014-04-28 08:02:02.420000+00:00
null
sparql
['http://www.intellidimension.com/developers/library/sparql-extensions.aspx#quantification', 'http://www.cs.ox.ac.uk/publications/publication3409-abstract.html', 'http://arxiv.org/abs/cs/0605124']
3
52,892,468
<p>These kinds of problems happen pretty often and you shouldn't give up. First, of course, you should do one or two more checks that the code is all right - try to compare your code to other implementations, see how the loss function behaves etc. If you are pretty sure your code is all fine - and, as you say that the model can learn the task from time to time, it probably is - you should start experimenting with the hyper-parameters.</p> <p>Your problems seem to be connected to hyper-parameters like the exploration technique, the learning rate, the way you are updating the target networks, and the experience replay memory. I would not play around with the hidden layer sizes - find the values for which the model learned once and keep them fixed. </p> <ul> <li><p><strong>Exploration technique:</strong> I assume you use an epsilon-greedy strategy. My advice would be to start with a high epsilon value (I usually start with 1.0) and decay it after each step or episode, but define an epsilon_min too. Starting with a low epsilon value may be the cause of the different learning speeds and success rates - if you go full random, you always populate your memory with similar kinds of transitions at the beginning. With a higher epsilon at the start, there is a bigger chance for your model to explore enough before the exploitation phase begins. </p></li> <li><p><strong>Learning rate:</strong> Make sure it is not too big. A smaller rate may lower the learning speed, but it helps a learned model not to escape from global minima back to some local, worse ones. Also, adaptive learning rates such as those calculated with <a href="https://arxiv.org/abs/1412.6980" rel="noreferrer">Adam</a> might help you. Of course the batch size has an impact as well, but I would keep it fixed and worry about it only if the other hyper-parameter changes don't work. </p></li> <li><p><strong>Target network update (rate and value):</strong> This is an important one as well. You have to experiment a bit - not only with how often you perform the update, but also with how much of the primary values you copy into the target ones. People often do a hard update each episode or so, but try doing soft updates instead if the first technique does not work. </p></li> <li><p><strong>Experience replay</strong>: Do you use it? You should. How big is your memory size? This is a very important factor and the memory size can influence the stability and success rate (<a href="https://arxiv.org/abs/1712.01275" rel="noreferrer">A Deeper Look at Experience Replay</a>). Basically, if you notice instability in your algorithm, try a bigger memory size, and if it affects your learning curve a lot, try out the technique proposed in the mentioned paper.</li> </ul>
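<p>To make the exploration and target-update points concrete, here is a tiny framework-agnostic sketch; all constants and names are illustrative, not recommendations, and you would plug the pieces into your own training loop:</p> <pre><code>import random

EPS_START, EPS_MIN, EPS_DECAY = 1.0, 0.01, 0.995   # exploration schedule
TAU = 0.001                                        # soft target-update rate

epsilon = EPS_START

def select_action(q_values):
    """Epsilon-greedy over a list/array of Q-values for one state."""
    if random.random() &lt; epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def soft_update(target_weights, primary_weights):
    """Move each target weight a small step towards the primary weight."""
    return [TAU * p + (1.0 - TAU) * t
            for p, t in zip(primary_weights, target_weights)]

# At the end of every episode (or every step):
epsilon = max(EPS_MIN, epsilon * EPS_DECAY)
</code></pre>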
2018-10-19 12:36:00.437000+00:00
2018-10-19 12:36:00.437000+00:00
null
null
52,770,780
<p>I am trying to implement DQN and DDQN (both with experience replay) to solve the OpenAI Gym CartPole environment. Both approaches are able to learn and solve this problem sometimes, but not always.</p> <p>My network is simply a feed-forward network (I've tried using 1 and 2 hidden layers). I created one network for DQN, and two networks for DDQN: a target network to evaluate the Q value and a primary network to choose the best action; I train the primary network and copy it to the target network after some episodes.</p> <p>The problem in DQN is: </p> <ul> <li>Sometimes it can achieve the perfect 200 score within 100 episodes, but sometimes it gets stuck and only achieves a score of 10 no matter how long it is trained. </li> <li>Also, in case of successful learning, the learning speed differs.</li> </ul> <p>The problem in DDQN is:</p> <ul> <li>It can learn to achieve a 200 score, but then it seems to forget what it has learned and the score drops dramatically.</li> </ul> <p>I've tried tuning the batch size, learning rate, number of neurons in the hidden layer, the number of hidden layers, and the exploration rate, but instability persists. </p> <p>Are there any rules of thumb on the size of the network and the batch size? I think a reasonably larger network and a larger batch size will increase stability. </p> <p>Is it possible to make the learning stable? Any comments or references are appreciated!</p>
2018-10-12 01:00:40.840000+00:00
2022-07-29 02:24:37.090000+00:00
null
python|tensorflow|reinforcement-learning|q-learning
['https://arxiv.org/abs/1412.6980', 'https://arxiv.org/abs/1712.01275']
2
55,121,970
<p>In short: the model certainly does use word embeddings; they are just not pre-trained embeddings like GloVe or word2vec. Instead, the embeddings are randomly initialised and trained jointly along with the rest of the network.</p> <p>In the full description of the network in section A.2 of the original Bahdanau et al. paper, you'll see the word embedding matrices <code>E</code> described for both the encoder and decoder. How they were initialised is also described in section B.1.</p> <p>This usually works as well as or better than pre-trained embeddings in situations where you have enough data. That said, in a low-resource setting, it can help to initialise the embedding matrix with pre-trained embeddings. <a href="https://arxiv.org/abs/1804.06323" rel="nofollow noreferrer">This paper</a> might help you explore that idea in further detail.</p> <p>In addition, your statement that current implementations don't do this is not entirely accurate - while it's true that the embeddings are usually jointly trained by default, many existing neural MT toolkits have the option to initialise the embeddings with pre-trained vectors. For example, <a href="http://opennmt.net/OpenNMT-py/FAQ.html#how-do-i-use-pretrained-embeddings-e-g-glove" rel="nofollow noreferrer">OpenNMT-py</a>, <a href="https://marian-nmt.github.io/docs/#custom-embeddings" rel="nofollow noreferrer">Marian</a>.</p>
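<p>As an illustration of the difference (in PyTorch here, not in any specific NMT toolkit), randomly initialised versus pre-trained initialisation of an embedding layer might look like the following sketch; the <code>pretrained</code> matrix below is only a placeholder you would fill from e.g. a GloVe file:</p> <pre><code>import torch
import torch.nn as nn

vocab_size, emb_dim = 10000, 300

# Default in most seq2seq models: randomly initialised, trained jointly
embedding = nn.Embedding(vocab_size, emb_dim)

# Alternative: start from pre-trained vectors and keep fine-tuning them.
pretrained = torch.randn(vocab_size, emb_dim)  # placeholder for real GloVe rows
embedding = nn.Embedding.from_pretrained(pretrained, freeze=False)
</code></pre>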
2019-03-12 12:52:31.837000+00:00
2019-03-12 12:59:18.603000+00:00
2019-03-12 12:59:18.603000+00:00
null
55,121,521
<p>In the paper <a href="https://arxiv.org/abs/1409.0473" rel="nofollow noreferrer">Neural Machine Translation by Jointly Learning to Align and Translate</a> by Bahdanau et al., why are no pre-trained word embeddings such as GloVe or word2vec used? </p> <p>I understand that this was a 2014 paper, but the current implementations of the paper on GitHub don't use any pre-trained word embeddings either?</p> <p>If I try to implement the paper, is it reasonable to use pre-trained word embeddings?</p>
2019-03-12 12:30:14.207000+00:00
2019-03-12 12:59:18.603000+00:00
null
nlp|word-embedding|machine-translation|attention-model
['https://arxiv.org/abs/1804.06323', 'http://opennmt.net/OpenNMT-py/FAQ.html#how-do-i-use-pretrained-embeddings-e-g-glove', 'https://marian-nmt.github.io/docs/#custom-embeddings']
3
51,145,579
<p>A short answer:</p> <p>Except for your first line, the rest are all adaptive gradient-descent optimizers, which means they automatically adjust the effective step size based on certain statistics collected during every step. The learning rate you pass in is therefore used as the initial/base value.</p> <p>Take <code>AdamOptimizer</code> as an example; you can learn its details in this <a href="https://arxiv.org/abs/1412.6980" rel="nofollow noreferrer">article</a>.</p>
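<p>If you want to control the base learning rate yourself during training, one common TF1-style pattern is to feed it through a placeholder. This is only a toy sketch (not taken from the blog post), but it shows that a value fed at run time does take effect at every step:</p> <pre><code>import tensorflow as tf

# Toy model: one weight, just to have a `loss` to minimize.
x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.zeros([1, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

lr = tf.placeholder(tf.float32, shape=[])        # learning rate fed at run time
train_op = tf.train.AdamOptimizer(learning_rate=lr).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(2000):
        current_lr = 0.01 if step &lt; 1000 else 0.001   # your own schedule
        sess.run(train_op, feed_dict={x: [[1.0]], y: [[2.0]], lr: current_lr})
</code></pre>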
2018-07-03 01:41:37.010000+00:00
2018-07-03 01:41:37.010000+00:00
null
null
51,144,993
<p>I'm reading this blog</p> <p><a href="https://smist08.wordpress.com/2016/10/04/the-road-to-tensorflow-part-10-more-on-optimization/" rel="nofollow noreferrer">https://smist08.wordpress.com/2016/10/04/the-road-to-tensorflow-part-10-more-on-optimization/</a></p> <p>where it mentions all the tensorflow's learning rates</p> <pre><code>optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step) optimizer = tf.train.AdadeltaOptimizer(starter_learning_rate).minimize(loss) optimizer = tf.train.AdagradOptimizer(starter_learning_rate).minimize(loss) # promising optimizer = tf.train.AdamOptimizer(starter_learning_rate).minimize(loss) # promising optimizer = tf.train.MomentumOptimizer(starter_learning_rate, 0.001).minimize(loss) # diverges optimizer = tf.train.FtrlOptimizer(starter_learning_rate).minimize(loss) # promising optimizer = tf.train.RMSPropOptimizer(starter_learning_rate).minimize(loss) # promising </code></pre> <p>It says that the learning rate you input is only the starter learning rate. Does that mean that if you change the learning rate in the middle of training, that change will have no effect because it's not using the starter learning rate anymore? </p> <p>I tried looking at the API docs and it doesn't specify this. </p>
2018-07-02 23:53:09.613000+00:00
2018-07-03 01:41:37.010000+00:00
null
python|tensorflow|machine-learning|deep-learning
['https://arxiv.org/abs/1412.6980']
1
39,919,835
<p>First you need some placeholders to hold your training data (one batch):</p> <pre><code>x_input = tf.placeholder(tf.float32, [batch_size, truncated_series_length, 1]) y_output = tf.placeholder(tf.float32, [batch_size, truncated_series_length, 1]) </code></pre> <p>An LSTM needs a state, which consists of two components, the hidden state and the cell state; there is a very good guide here: <a href="https://arxiv.org/pdf/1506.00019.pdf" rel="nofollow">https://arxiv.org/pdf/1506.00019.pdf</a>. For every layer in the LSTM you have one cell state and one hidden state.</p> <p>The problem is that TensorFlow stores this in an LSTMStateTuple, which you cannot feed through a placeholder. So you need to store it in a tensor, and then unpack it into a tuple:</p> <pre><code>state_placeholder = tf.placeholder(tf.float32, [num_layers, 2, batch_size, state_size]) l = tf.unpack(state_placeholder, axis=0) rnn_tuple_state = tuple( [tf.nn.rnn_cell.LSTMStateTuple(l[idx][0], l[idx][1]) for idx in range(num_layers)] ) </code></pre> <p>Then you can use the built-in TensorFlow API to create the stacked LSTM layers.</p> <pre><code>cell = tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True) cell = tf.nn.rnn_cell.MultiRNNCell([cell]*num_layers, state_is_tuple=True) outputs, state = tf.nn.dynamic_rnn(cell, x_input, initial_state=rnn_tuple_state) </code></pre> <p>From here you continue with the outputs to calculate logits and then a loss with respect to <code>y_output</code> (a sketch of this step follows below). </p> <p>Then you run each batch with the <code>sess.run</code>-command, with truncated backpropagation (good explanation here <a href="http://r2rt.com/styles-of-truncated-backpropagation.html" rel="nofollow">http://r2rt.com/styles-of-truncated-backpropagation.html</a>)</p> <pre><code> init_state = np.zeros((num_layers, 2, batch_size, state_size)) ...current_state... = sess.run([...state...], feed_dict={x_input:batch_in, state_placeholder:current_state ...}) current_state = np.array(current_state) </code></pre> <p>You will have to convert the state to a <code>numpy</code> array before feeding it again.</p> <p>Perhaps it is better to use a library like TFLearn or Keras instead?</p>
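<p>Picking up the "calculate logits and then a loss" step, a rough continuation of the snippet above for a regression target (such as power consumption) could look like this; the read-out size and learning rate are assumptions, and the variable names come from the code above:</p> <pre><code># Continues the snippet above: a linear read-out per time step and an L2 loss.
W_out = tf.Variable(tf.truncated_normal([state_size, 1], stddev=0.1))
b_out = tf.Variable(tf.zeros([1]))

outputs_flat = tf.reshape(outputs, [-1, state_size])     # (batch*T, state_size)
predictions = tf.matmul(outputs_flat, W_out) + b_out     # (batch*T, 1)
targets_flat = tf.reshape(y_output, [-1, 1])

loss = tf.reduce_mean(tf.square(predictions - targets_flat))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
</code></pre>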
2016-10-07 14:35:56.437000+00:00
2016-10-10 06:43:56.280000+00:00
2016-10-10 06:43:56.280000+00:00
null
39,138,447
<p>What I have is the following, which I believe is a network with one hidden LSTM layer:</p> <pre><code># Parameters learning_rate = 0.001 training_iters = 100000 batch_size = 128 display_step = 10 # Network Parameters n_input = 13 n_steps = 10 n_hidden = 512 n_classes = 13 # tf Graph input x = tf.placeholder("float", [None, n_steps, n_input]) y = tf.placeholder("float", [None, n_classes]) # Define weights weights = { 'out' : tf.Variable(tf.random_normal([n_hidden, n_classes])) } biases = { 'out' : tf.Variable(tf.random_normal([n_classes])) } </code></pre> <p>However, I am trying to build an LSTM network using TensorFlow to predict power consumption. I have been looking around to find a good example, but I could not find any model with 2 hidden LSTM layers. Here's the model that I would like to build:</p> <p>1 input layer, 1 output layer, 2 hidden LSTM layers (with 512 neurons in each), time step (sequence length): 10</p> <p>Could anyone guide me to build this using TensorFlow? (from defining weights, building the input shape, training, predicting, use of an optimizer or cost function, etc.); any help would be much appreciated.</p> <p>Thank you so much in advance!</p>
2016-08-25 06:47:10.637000+00:00
2017-07-10 17:06:23.170000+00:00
null
tensorflow|lstm
['https://arxiv.org/pdf/1506.00019.pdf', 'http://r2rt.com/styles-of-truncated-backpropagation.html']
2
50,558,626
<p>To start with, the <code>r2_score()</code> metric <strong>can be arbitrarily low.</strong> It need not be between -1 and 1 and does not correspond to the Pearson correlation coefficient. A score of ~-10 then just means that the model is performing significantly worse than a model which outputs mean values (corresponding to an r2 score of 0).</p> <p>The <code>r2_score()</code> you chose is a fine preliminary metric for your model. The output looks strange not because there is a problem with the metric but because there is a problem with the model. In its current state, improving <code>r2_score()</code> will probably improve any other metric of interest. You might also be interested in the <strong>similarity between metrics</strong> on your test set and metrics on your train set -- the more similar they are the more likely your model is to generalize well (under the manifold hypothesis or any other framework avoiding no free lunch theorems). Depending on the intended application, you might care about the <strong>worst case scenario</strong> -- it might not matter if you have an r2 of 0.999 if the worst case is that a self driving car mistakes a pedestrian for a safe place to drive. You might be interested in a wide range of <strong>calibration scores</strong> -- if your model predicts a strong pattern with good accuracy but the <a href="https://www.analyticsvidhya.com/blog/2013/12/residual-plots-regression-model/" rel="nofollow noreferrer">residuals</a> demonstrate a strong bias in the model, something may very well be amiss. In general, the quality of a model depends strongly on its intended application, and your metrics should reflect your end goals. Using <a href="https://arxiv.org/abs/1712.01208" rel="nofollow noreferrer">The Case for Learned Index Structures</a> as an example, sometimes generalization is a bad thing and not what you want at all.</p> <p>Several potential problems exist in your approach. I'm not quite sure about metallurgy, but many chemistry problems have chaotic interaction dynamics and are <strong>not easily modeled with a naive approach</strong> which just slaps a model onto the input data (naive is not meant negatively here -- the naive approach is often a good starting point and may completely satisfy the requirements at hand).</p> <p>As a rule of thumb, <strong>most of the gains you will find are going to be in feature engineering.</strong> In the case of chemical problems, this can involve taking the outputs from standard solutions to the problem (which are obviously insufficient since you're going straight for machine learning) and treating them as features in your neural network. The idea is that even though no individual model is perfect, they're all right often enough and wrong in different ways so that the neural network is able to figure out how to combine them together to create a better answer. This is a type of ensemble technique.</p> <p>What data visualizations/analysis have you done? Do you have an idea of which features correspond to the output you're trying to predict? It's even possible that the input you have does not have enough information to predict your desired output. Have you examined the data before dropping the NaN values? Would a better imputation method yield additional gains? Your model is coded fine. <strong>Understanding your data is easily the most important task at hand.</strong></p> <p>Scikit-learn can have problems optimizing networks with small numbers of nodes in each layer. A grid search and cross validation procedure to help determine the optimal parameters may improve your model substantially. The solution might be sensitive to tolerances and other such parameters.</p>
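<p>If you go down the grid-search route, a minimal sketch with scikit-learn's <code>GridSearchCV</code> might look like the following; the grid values are only starting points (not recommendations), and <code>X</code>, <code>y</code> stand for your own feature matrix and target:</p> <pre><code>from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

param_grid = {
    'mlpregressor__hidden_layer_sizes': [(13, 13, 13), (50,), (100, 50)],
    'mlpregressor__alpha': [1e-4, 1e-3, 1e-2],
    'mlpregressor__learning_rate_init': [1e-3, 1e-2],
}

# Scaling inside the pipeline avoids leaking test statistics into training.
pipe = make_pipeline(StandardScaler(), MLPRegressor(max_iter=2000))
search = GridSearchCV(pipe, param_grid, cv=5, scoring='r2')
search.fit(X, y)
print(search.best_params_, search.best_score_)
</code></pre>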
2018-05-28 03:09:51.300000+00:00
2018-05-28 03:35:01.837000+00:00
2018-05-28 03:35:01.837000+00:00
null
50,558,581
<p><br> Today I'd like to ask for some help with the quality of my neural network. I've been working on a project to predict parameters in metallurgy. To make sure that my neural network is on the right track, I tried to use some scikit-learn functions like "score" and "r^2", but with no success.</p> <p>With the current code my "r²" is -10.42239374572942; this value seems unreal because, as far as I know, r² must be between -1 and 1.</p> <p>Does anyone have any suggestions for evaluating my neural network? Why is my code not working?</p> <p>Thanks guys. See you.</p> <p>My code follows below:</p> <pre><code># coding: utf-8 import pandas as pd import numpy as np # plotting module import matplotlib.pyplot as plt # the network module itself from sklearn.neural_network import MLPRegressor # to test the neural network from sklearn.model_selection import train_test_split # for normalization from sklearn.preprocessing import StandardScaler # to test the quality of the neural network from sklearn.metrics import mean_squared_error, r2_score # loading the CSV with the AF1-Gerdau data df = pd.read_csv('Rede3.03.11.17_MOACIR_b.csv', delimiter=';', encoding = "ISO-8859-1" ) df2 = df.dropna(how='all') # ## Defining the input variables and the response X = df2.drop(['Fuel Rate'], axis=1) # keeping all columns except the response variable "Fuel Rate" y = df2['Fuel Rate'] # response variable "Fuel Rate" # ## Normalizing the data for better convergence scaler = StandardScaler() X_train, X_test, y_train, y_test = train_test_split(X, y) # Fit only on the training data scaler.fit(X_train) # Applying the normalization transform to the data: X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) # ## Creating the ANN parameters rna = MLPRegressor(hidden_layer_sizes=(13,13,13), max_iter=2000) # ## Training the ANN rna.fit(X_train,y_train) # ## Testing the network y_predicted = rna.predict(X_test) # The coefficients print('Coefficients: \n', r2_score(y_test, y_predicted)) </code></pre>
2018-05-28 03:02:12.840000+00:00
2018-05-28 03:40:47.170000+00:00
null
python|pandas|scikit-learn|neural-network|evaluation
['https://www.analyticsvidhya.com/blog/2013/12/residual-plots-regression-model/', 'https://arxiv.org/abs/1712.01208']
2
48,932,000
<p>Yes, there are some Telegram bots for arXiv. One candidate for what you're looking for is <a href="https://telegram.me/dailyarxiv_bot" rel="nofollow noreferrer">@dailyarXiv_bot</a>, which sends you newly submitted articles every day. Another well-known option is <a href="https://github.com/carlosparaciari/ArXivBot" rel="nofollow noreferrer">@ArXivBot</a>. A bot I've seen more recently is <a href="https://github.com/Noxbru/arXiv_Kitten" rel="nofollow noreferrer">arXiv_kitten</a>.</p>
2018-02-22 16:05:52.163000+00:00
2018-02-22 16:05:52.163000+00:00
null
null
48,931,406
<p>I was wondering if there exists any Telegram bot that sends me arXiv articles <strong>every day</strong>?</p> <p>I searched the internet but couldn't find one. I need it since it's basically hard to browse arXiv every day and read new articles. I am not sure if this is the best place to ask this question, so sorry if it's not. Thanks.</p>
2018-02-22 15:39:35.283000+00:00
2018-02-23 21:04:07.857000+00:00
2018-02-23 21:04:07.857000+00:00
telegram-bot
['https://telegram.me/dailyarxiv_bot', 'https://github.com/carlosparaciari/ArXivBot', 'https://github.com/Noxbru/arXiv_Kitten']
3
49,609,216
<p>There is another way you can visualize the activations of your hidden layers, as described in this paper: <a href="http://arxiv.org/pdf/1506.06579.pdf" rel="nofollow noreferrer">http://arxiv.org/pdf/1506.06579.pdf</a></p> <p>Check the following post for how it is implemented on the MNIST dataset:</p> <p><a href="https://medium.com/@awjuliani/visualizing-neural-network-layer-activation-tensorflow-tutorial-d45f8bf7bbc4" rel="nofollow noreferrer">https://medium.com/@awjuliani/visualizing-neural-network-layer-activation-tensorflow-tutorial-d45f8bf7bbc4</a></p> <p>Do let me know in the comments if you need any further clarification.</p>
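<p>For the simpler option of just looking at a hidden layer's activations for a given input, you can also fetch the intermediate tensor directly rather than reversing the graph. A rough sketch using the tensor names from the question (the random batch is only a stand-in for real data after training):</p> <pre><code>import tensorflow as tf
import numpy as np

# z1 and `inputs` refer to the tensors defined in the question's snippet.
batch = np.random.rand(16, 784).astype(np.float32)   # stand-in for real data

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training would go here ...
    act1 = sess.run(z1, feed_dict={inputs: batch})    # shape (16, 100)
    # act1 can now be plotted, e.g. one heat map row per example.
</code></pre>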
2018-04-02 10:04:18.707000+00:00
2018-04-02 10:04:18.707000+00:00
null
null
49,575,909
<p>As the title says, I'm looking for a way to reverse the flow of a TensorFlow graph. The reason for this, is that I want to visualize the hidden layers of the graph given a logit vector for the output of the trained graph.</p> <p>For example, say that I have a fully connected graph given as follows (inspired by MNIST):</p> <pre><code>inputs = tf.placeholder(dtype=tf.float32, shape=[None, 784]) hidden_w1 = tf.get_variable('w1', [784,100], initializer=tf.random_normal_initializer) hidden_b1 = tf.get_variable('b1', [100], initializer=tf.random_normal_initializer) a1 = tf.matmul(inputs, hidden_w1) + hidden_b1 z1 = tf.nn.relu(a1) hidden_w2 = tf.get_variable('w2', [100,100], initializer=tf.random_normal_initializer) hidden_b2 = tf.get_variable('b2', [100], initializer=tf.random_normal_initializer) a2 = tf.matmul(z1, hidden_w2) + hidden_b2 z2 = tf.nn.relu(a2) output_w = tf.get_variable('w3', [100,10], initializer=tf.random_normal_initializer) output_b = tf.get_variable('b3', [10], initializer=tf.random_normal_initializer) a3 = tf.matmul(z2, output_w) + output_b output = tf.nn.relu(a3) loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=..., logits=output) train_op = tf.train.AdamOptimizer().minimize(loss) </code></pre> <p>Suppose I now train this graph and want to visualize hidden 1 when only the first output neuron is activated. The way I would do this would be to reverse the flow of the graph and feed a tensor <code>[1, 0, 0, 0, 0, 0, 0, 0, 0, 0]</code> from the output layer back through the reversed graph until I finally got the output of the hidden1 layer. I have tried to see if there is a way to do this in TensorFlow, but there seems to be little information about this. The way I would inutitively construct it is to add an operation sess.run_reverse() when running the graph as follows:</p> <pre><code>with tf.Session() as sess: while training: sess.run(train_op, feed_dict={inputs:...}) # finished training, reverse graph category_to_visualize = tf.one_hot(indices=0, depth=10) sess.run_reverse(hidden1, feed_dict={output:category_to_visualize}) </code></pre> <p>If this sort of operation doesn't exist or even is possible to get, however, I would instead construct separate operators for reversing the flow of the graph as follows:</p> <pre><code>output_reversed = tf.placeholder(dtype=tf.float32, shape=[1,10]) z3_reversed = tf.nn.relu(output_reversed) a3_reversed = tf.matrix_inverse(output_w)*(z3_reversed - output_b) z2_reversed = tf.nn.relu(a3_reversed) a2_reversed = tf.matrix_inverse(hidden_w2)*(z2_reversed - hidden_b2) z1_reversed = tf.nn.relu(a2_reversed) a1_reversed = tf.matrix_inverse(hidden_w1)*(z1_reversed - hidden_b1) </code></pre> <p>I realize that there might be logical flaws to this method that wouldn't make it possible. A couple of things I've overlooked is singular matrices and undefined inversion of ReLu when input is below 0 (ReLu, though, can be replaced by sigmoid for theoretically defined inversion of the entire input space). The core idea, though, is to visualize a feature map given a category - something I believe should be possible if a few assumptions are allowed.</p> <p>Anyways, please tell me if I'm thinking wrongly here, and if there is a way to reverse the graph!</p>
2018-03-30 14:04:40.013000+00:00
2018-04-02 10:04:18.707000+00:00
null
python|tensorflow
['http://arxiv.org/pdf/1506.06579.pdf', 'https://medium.com/@awjuliani/visualizing-neural-network-layer-activation-tensorflow-tutorial-d45f8bf7bbc4']
2
42,261,691
<p>This is the <a href="https://en.wikipedia.org/wiki/Subset_sum_problem" rel="nofollow noreferrer">Subset Sum Problem</a> and it is <a href="https://en.wikipedia.org/wiki/NP-complete" rel="nofollow noreferrer">NP-Complete</a>.</p> <blockquote> <p>... given a set of integers and an integer s, does any non-empty subset sum to s? ...</p> </blockquote> <p>Here:</p> <ul> <li><strong>$requiredDonation</strong> is the <strong>s</strong></li> <li><strong>$userDonation</strong> is the <strong>set of integers</strong></li> <li><strong>magicalFunction</strong> returns the subset of the <strong>set of integers</strong> that sums to <strong>s</strong>. (A non-false return value corresponds to a "yes" answer to the NP-Complete decision problem, and false to a "no".)</li> </ul> <p>The <a href="https://en.wikipedia.org/wiki/Subset_sum_problem" rel="nofollow noreferrer">Wikipedia link</a> describes some approaches to the subset sum problem, and a Google search turns up others.</p> <ul> <li><a href="https://arxiv.org/pdf/1507.02318.pdf" rel="nofollow noreferrer">Here</a> is one recommendation.</li> <li><a href="https://github.com/burento/Subset-Sum-Problem/blob/master/subset.php" rel="nofollow noreferrer">Here</a> is a PHP-based solver.</li> </ul> <p>You may also wish to evaluate the requirements, as the problem may get out of hand if the number of donations to search is large. (This is a property of NP-Complete problems: they can take a long time to solve exactly.)</p>
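<p>For completeness, here is a sketch of the standard pseudo-polynomial dynamic-programming approach, written in Python rather than PHP (porting it is mechanical); it uses the numbers from the question:</p> <pre><code>def subset_sum(donations, target):
    """Return a list of keys whose values sum to target, or False.

    donations: dict mapping donor id -&gt; amount (positive integers).
    Dynamic programming over reachable sums: O(len(donations) * target).
    """
    reachable = {0: []}                    # sum -&gt; list of keys producing it
    for key, amount in donations.items():
        for s in list(reachable):          # snapshot so each key is used once
            new_sum = s + amount
            if new_sum &lt;= target and new_sum not in reachable:
                reachable[new_sum] = reachable[s] + [key]
        if target in reachable:
            return reachable[target]
    return False

donations = {1: 100, 4: 8064, 5: 578, 6: 752, 21: 512,
             121: 660, 152: 135, 199: 1350}
print(subset_sum(donations, 886))    # False
print(subset_sum(donations, 1465))   # [5, 6, 152]
</code></pre>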
2017-02-15 22:58:03.577000+00:00
2017-02-15 23:03:45.590000+00:00
2017-02-15 23:03:45.590000+00:00
null
42,261,339
<p>Here is my requirement: an algorithm to find the donor ids whose donations sum to a required donation amount.</p> <pre><code>&lt;?php function magicalFunction($donationArray , $expectedValue ){ /* need some algorithm to get the keys of elements from $donationArray whose sum is equal to $expectedValue */ } $userDonation = array( '1' =&gt; 100, '4' =&gt; 8064, '5' =&gt; 578, '6' =&gt; 752, '21' =&gt; 512, '121' =&gt; 660, '152' =&gt; 135, '199' =&gt; 1350 ); $requiredDonation = 886; $selectedDonatee = magicalFunction( $userDonation , $requiredDonation ); // return false $requiredDonation = 1465; $selectedDonatee = magicalFunction( $userDonation , $requiredDonation ); // return array(5, 6, 152); ?&gt; </code></pre>
2017-02-15 22:31:22.117000+00:00
2017-02-16 13:18:06.930000+00:00
2017-02-16 13:18:06.930000+00:00
php|arrays|algorithm|function|loops
['https://en.wikipedia.org/wiki/Subset_sum_problem', 'https://en.wikipedia.org/wiki/NP-complete', 'https://en.wikipedia.org/wiki/Subset_sum_problem', 'https://arxiv.org/pdf/1507.02318.pdf', 'https://github.com/burento/Subset-Sum-Problem/blob/master/subset.php']
5
55,673,559
<p>I think the problem is that you zero the gradients right before calling backward, after the forward propagation. Note that for <a href="https://en.wikipedia.org/wiki/Automatic_differentiation" rel="nofollow noreferrer">automatic differentiation</a> you need the computation graph and the intermediate results that you produce during your forward pass.</p> <p>So zero the gradients <strong>before</strong> your TD error and target calculations! And not after you have finished your forward propagation.</p> <pre><code> for cur_step in range(1): action = M_Agent(state, flag) next_state, r = env.step(action) optimizer_M.zero_grad() # zero your gradient here # calculate TD Error TD_error = M_Agent.cal_td_error(r, next_state) # calculate Target target = torch.FloatTensor([M_Agent.cal_target(TD_error)]) logit = M_Agent.cal_logit() loss = criterion(logit, target) # update value Func TD_error.backward() optimizer_M.step() # update Actor Func loss.backward() optimizer_M.step() </code></pre> <p>To answer your second question, the DDPG algorithm for example uses the squared error (see the <a href="https://arxiv.org/pdf/1509.02971v2.pdf" rel="nofollow noreferrer">paper</a>).</p> <p>Another recommendation: in many cases, large parts of the value and policy networks are shared in deep actor-critic agents. You keep the same layers up to the last hidden layer, and use a single linear output for value prediction and a softmax layer for the action distribution. This is especially useful if you have high-dimensional visual inputs, as it acts as a sort of multi-task learning, but you can try it nevertheless. (As I see, you have a low-dimensional state vector.)</p>
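<p>A minimal sketch of that shared-trunk idea (all layer sizes and names are invented for illustration, not taken from your agent):</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedActorCritic(nn.Module):
    """One shared trunk, two heads: action probabilities and state value."""
    def __init__(self, state_dim, num_actions):
        super(SharedActorCritic, self).__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
        )
        self.policy_head = nn.Linear(256, num_actions)
        self.value_head = nn.Linear(256, 1)

    def forward(self, state):
        h = self.trunk(state)
        action_probs = F.softmax(self.policy_head(h), dim=-1)
        value = self.value_head(h)
        return action_probs, value
</code></pre>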
2019-04-14 08:54:56.847000+00:00
2019-04-14 09:03:53.370000+00:00
2019-04-14 09:03:53.370000+00:00
null
55,673,412
<p>I am trying to implement <strong>Actor-Critic learning atuomation algorithm</strong> that is not same as basic actor-critic algorithm, it's little bit changed.</p> <p>Anyway, I used Adam optimizer and implemented with pytorch</p> <p>when i backward TD-error for Critic first, there's no error. However, i backward loss for Actor, the error occured.</p> <blockquote> <p>--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) in 46 # update Actor Func 47 optimizer_M.zero_grad() ---> 48 loss.backward() 49 optimizer_M.step() 50 </p> <p>~\Anaconda3\lib\site-packages\torch\tensor.py in backward(self, gradient, retain_graph, create_graph) 100 products. Defaults to <code>False</code>. 101 """ --> 102 torch.autograd.backward(self, gradient, retain_graph, create_graph) 103 104 def register_hook(self, hook):</p> <p>~\Anaconda3\lib\site-packages\torch\autograd__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables) 88 Variable._execution_engine.run_backward( 89 tensors, grad_tensors, retain_graph, create_graph, ---> 90 allow_unreachable=True) # allow_unreachable flag 91 92 </p> <p>RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation</p> </blockquote> <p>above is the content of error</p> <p>I tried to find inplace operation, but I haven't found in my written code. I think i don't know how to handle optimizer.</p> <p>Here is main code:</p> <pre><code> for cur_step in range(1): action = M_Agent(state, flag) next_state, r = env.step(action) # calculate TD Error TD_error = M_Agent.cal_td_error(r, next_state) # calculate Target target = torch.FloatTensor([M_Agent.cal_target(TD_error)]) logit = M_Agent.cal_logit() loss = criterion(logit, target) # update value Func optimizer_M.zero_grad() TD_error.backward() optimizer_M.step() # update Actor Func loss.backward() optimizer_M.step() </code></pre> <p>Here is the agent network</p> <pre><code> # Actor-Critic Agent self.act_pipe = nn.Sequential(nn.Linear(state, 128), nn.ReLU(), nn.Dropout(0.5), nn.Linear(128, 256), nn.ReLU(), nn.Dropout(0.5), nn.Linear(256, num_action), nn.Softmax() ) self.val_pipe = nn.Sequential(nn.Linear(state, 128), nn.ReLU(), nn.Dropout(0.5), nn.Linear(128, 256), nn.ReLU(), nn.Dropout(0.5), nn.Linear(256, 1) ) def forward(self, state, flag, test=None): temp_action_prob = self.act_pipe(state) self.action_prob = self.cal_prob(temp_action_prob, flag) self.action = self.get_action(self.action_prob) self.value = self.val_pipe(state) return self.action </code></pre> <p>I wanna update each network respectively.</p> <p>and I wanna know that Basic <strong>TD Actor-Critic</strong> method uses TD error for loss?? or squared error between r+V(s') and V(s) ?</p>
2019-04-14 08:34:04.253000+00:00
2019-04-14 09:03:53.370000+00:00
2019-04-14 08:42:06.083000+00:00
optimization|error-handling|deep-learning|pytorch|reinforcement-learning
['https://en.wikipedia.org/wiki/Automatic_differentiation', 'https://arxiv.org/pdf/1509.02971v2.pdf']
2
64,029,458
<p>The <a href="https://arxiv.org/pdf/1503.03832.pdf" rel="nofollow noreferrer">FaceNet</a> work should be a good start. The network does good feature matching for facial data. Even though the face-compare library uses the same model, it would be good if you could fine-tune the FaceNet model on another dataset and evaluate it with respect to the output from face-compare.</p> <p>Apart from that, different variants of the Siamese architecture can be tried for feature matching. If you want to compare the matching, try getting the triplet loss value for a set of images.</p>
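<p>As a rough illustration of how embedding-based matching is usually evaluated (this is not the face-compare API itself; the threshold is an assumption you would tune on a labelled validation set):</p> <pre><code>import numpy as np

def same_person(emb_a, emb_b, threshold=1.1):
    """Compare two face embeddings by Euclidean distance after L2 normalisation.
    threshold=1.1 is only a placeholder -- calibrate it on labelled pairs."""
    emb_a = emb_a / np.linalg.norm(emb_a)
    emb_b = emb_b / np.linalg.norm(emb_b)
    dist = float(np.linalg.norm(emb_a - emb_b))
    return dist, dist &lt; threshold
</code></pre>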
2020-09-23 13:49:21.750000+00:00
2020-09-23 16:46:07.683000+00:00
2020-09-23 16:46:07.683000+00:00
null
64,029,183
<p>I am trying to compare two faces and output whether they match or not. To do this, I did some research and found the face-compare package (<a href="https://pypi.org/project/face-compare/" rel="nofollow noreferrer">https://pypi.org/project/face-compare/</a>), which is based on FaceNet; it allows me to do this and works very well. Now I want to compare the accuracy of this solution with other solutions in order to choose the best one. Does anyone have ideas for other solutions (open source or commercial) that could help me with this benchmark?</p>
2020-09-23 13:33:21.377000+00:00
2020-09-23 16:46:07.683000+00:00
2020-09-23 15:18:08.563000+00:00
opencv|face-recognition|conv-neural-network|facenet
['https://arxiv.org/pdf/1503.03832.pdf']
1
49,593,229
<p>This is not the average JS question! Thanks for the links, it's a really interesting paper. I can't claim to be an expert, I have only done toy GA problems, but I did read this paper and related ones. Here is what I understand:</p> <ol> <li><p>I think all you need to worry about is whether a parent, by mutation, produces the same novel gene more than once in a generation. That is, two children, whose gene with the newest innovation number are identical. You can cull those right away. I think they say that it is possible for the same gene to appear in two species at the same time, and they basically say that's fine, that's rare enough not to worry about.</p></li> <li><p>I can find at least one reason: "In NEAT, a bias is a node that can connect to any node other than inputs."</p></li> <li>I believe your question is "must nodes have an innovation number to do crossover?" The answer is no. In the original paper (e.g. Figure 4) they show crossover implemented in a way where only connections have innovation numbers.</li> <li>If you want to change the mutation function to be architecture aware, rather than avoiding recurrent structure, you might want to explicitly add structures you do want. Suppose you want to avoid recurrent connections because you are evolving an image classifier, and you know that convolutions are more suited to the task. In this case, you want your mutation function to be able to add/remove <em>layers</em> (and the needed connections). This was <a href="https://arxiv.org/pdf/1703.01041.pdf" rel="nofollow noreferrer">explored in detail last year</a> by Google Brain:</li> </ol> <blockquote> <p>Some of the mutations acting on this DNA are reminiscent of NEAT. However, instead of single nodes, one mutation can insert whole layers—i.e. tens to hundreds of nodes at a time. We also allow for these layers to be removed, so that the evolutionary process can simplify an architecture in addition to complexifying it.</p> </blockquote> <p>Based on your comment about your motivation for question 4, I think you are mistaken. In the XOR example in the original paper, figure 5, they show a starting phenotype that involves no hidden layer. This starting phenotype is not a solution to the XOR problem, but it provides a good starting point: "NEAT is very consistent in finding a solution. It did not fail once in 100 simulations." That is without any penalization for recurrence.</p>
2018-03-31 22:59:52.227000+00:00
2018-04-02 16:59:34.253000+00:00
2018-04-02 16:59:34.253000+00:00
null
49,589,689
<p>I've recently read the original <a href="http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf" rel="nofollow noreferrer">paper</a> about NeuroEvolution of Augmenting Topologies by Kenneth O. Stanley and am now trying to prototype it myself in JavaScript. I stumbled across a few questions I can't answer.</p> <hr> <h2>My questions:</h2> <ol> <li><p>What is the definition of "structural innovation", and how do I store these so I can check if an innovation has already happened before?</p> <blockquote> <p>However, by keeping a list of the innovations that occurred in the current generation, it is possible to ensure that when the same structure arises more than once through independent mutations in the same generation, each identical mutation is assigned the same innovation number</p> </blockquote></li> <li><p>Is there a reason for storing the type of a node (input, hidden, output)?</p></li> <li><p>In the original paper, only connections have an innovation number, but in <a href="http://www.automatonsadrift.com/neat/" rel="nofollow noreferrer">other sources</a>, nodes do as well. Is this necessary for crossover? (This has already been asked <a href="https://ai.stackexchange.com/questions/5496/neat-innovation-for-connection-genes-only">here.</a>)</p></li> <li><p>How could I limit the mutation functions to not add recurrent connections?</p></li> </ol> <p>I think that's it for now. All help is appreciated.</p> <hr> <h2>The relevant parts of my code:</h2> <h3>Genome</h3> <pre><code>class Genome { constructor(inputs, outputs) { this.inputs = inputs; this.outputs = outputs; this.nodes = []; this.connections = []; for (let i = 0; i &lt; inputs + outputs; i++) { this.nodes.push(new Node()); } for (let i = 0; i &lt; inputs; i++) { for (let o = 0; o &lt; outputs; o++) { let c = new Connection(this.nodes[i], this.nodes[inputs + o], outputs * i + o); this.connections.push(c); } } innovation = inputs * outputs; } weightMutatePerturb() { let w = this.connections[Math.floor(random(this.connections.length))].weight; w += random(-0.5, 0.5); } weightMutateCreate() { this.connections[Math.floor(random(this.connections.length))].weight = random(-2, 2); } connectionMutate() { let i = this.nodes[Math.floor(random(this.nodes.length))]; let o = this.nodes[Math.floor(random(this.inputs, this.nodes.length))]; let c = Connection.exists(this.connections, i, o); if (c) { c.enabled = true; } else { this.connections.push(new Connection(i, o, innovation)); innovation++; } } nodeMutate() { let oldCon = this.connections[Math.floor(Math.random(this.connections.length))]; oldCon.enabled = false; let newNode = new Node(); this.nodes.push(newNode); this.connections.push(new Connection(oldCon.input, newNode, innovation, 1)); innovation++; this.connections.push(new Connection(newNode, oldCon.output, innovation, oldCon.weight)); innovation++; } } </code></pre> <h3>Node</h3> <pre><code>class Node { constructor() { this.value = 0; this.previousValue = 0; } } </code></pre> <h3>Connection</h3> <pre><code>class Connection { constructor(input, output, innov, weight) { this.input = input; this.output = output; this.innov = innov; this.weight = weight ? weight : random(-2, 2); this.enabled = true; } static exists(connections, i, o) { for (let c = 0; c &lt; connections.length; c++) { if (connections[c].input === i &amp;&amp; connections[c].output === o) { return connections[c]; } } return false; } } </code></pre> <p>All answers an sources are welcome. (You are an awesome person!)</p>
2018-03-31 15:57:49.883000+00:00
2018-04-03 08:18:35.573000+00:00
2018-04-01 18:20:37.633000+00:00
javascript|neural-network|genetic-algorithm|recurrent-neural-network|evolutionary-algorithm
['https://arxiv.org/pdf/1703.01041.pdf']
1
55,293,572
<p>Conjugate gradient and quasi-Newton algorithms are still gradient descent algorithms. Backpropagation (or backprop) is <a href="https://idontgetoutmuch.wordpress.com/2013/10/13/backpropogation-is-just-steepest-descent-with-automatic-differentiation-2/" rel="noreferrer">nothing more than a fancy name</a> for the gradient computation. </p> <p>However, the original question about alternatives to backprop is very important. One of the recent alternatives, for example, is <a href="http://arxiv.org/abs/1602.05179" rel="noreferrer">equilibrium propagation</a> (or eqprop for short).</p>
2019-03-22 05:45:46.347000+00:00
2019-03-22 05:45:46.347000+00:00
null
null
55,287,004
<p>I know a neural network can be trained using gradient descent and I understand how it works.</p> <p>Recently, I stumbled upon other training algorithms: conjugate gradient and quasi-Newton algorithms. I tried to understand how they work but the only good intuition I could get is that they use higher order derivative.</p> <p>My questions are the following: are those alternative algorithms I mentioned fundamentally different from a backpropagation process where weights are adjusted by using the gradient of the loss function? If not, is there an algorithm to train a neural network that is fundamentally different from the mechanism of backpropagation?</p> <p>Thanks</p>
2019-03-21 18:28:27.360000+00:00
2020-03-09 23:00:06.933000+00:00
null
neural-network|backpropagation|gradient-descent
['https://idontgetoutmuch.wordpress.com/2013/10/13/backpropogation-is-just-steepest-descent-with-automatic-differentiation-2/', 'http://arxiv.org/abs/1602.05179']
2
59,128,205
<p>Since your data is temporal, I would recommend using a model specifically intended for processing temporal data. As you mention, <a href="https://keras.io/layers/recurrent/#lstm" rel="nofollow noreferrer">LSTM</a> is quite popular, but Keras also has an implementation of <a href="https://keras.io/layers/recurrent/#gru" rel="nofollow noreferrer">GRU</a>, and you can also try <a href="https://github.com/philipperemy/keras-tcn" rel="nofollow noreferrer">Temporal Convolutional Networks (TCNs)</a>, which use simple causal convolutions, avoid the complicated memory/gating structures of LSTM and GRU, and have been shown to be more effective on some problems in <a href="https://arxiv.org/abs/1803.01271" rel="nofollow noreferrer">this paper</a>.</p> <p>You will be looking for a many-to-one temporal structure since you are taking an input sequence and predicting the next timestep. See <a href="https://stackoverflow.com/questions/43034960/many-to-one-and-many-to-many-lstm-examples-in-keras">this post</a> for help on implementing that with LSTMs. A key takeaway is that the Keras temporal layers have a <code>return_sequences</code> argument, which in your case should be set to <code>False</code>. The temporal models process the time dimension for you and, in the case of LSTMs, capture temporal dependencies by maintaining an internal memory. TCNs achieve similar behavior by performing 1-D convolutions, but causally, in the sense that each output only depends on the current and earlier time steps, so information from the future cannot leak into the past.</p> <p>I would recommend starting with LSTM, as you will find the most resources on blogs and SO questions about using them, and then you can try other models if you're not getting the results you want. I do not recommend using only dense layers, as they are not going to handle temporal relations properly, and I would also disagree with @Solvalou regarding 2D convolutions, because you are mixing temporal and spatial dimensions, which will more likely just confuse your network. If you do convolutions, the causal 1-D convolutions of TCN should give you what you want.</p>
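<p>A minimal many-to-one sketch in Keras, assuming the {1000, 100, 3} shape from the question (the layer width and the training settings are placeholders, not recommendations):</p> <pre><code>from keras.models import Sequential
from keras.layers import LSTM, Dense

timesteps, features = 100, 3       # matches the {samples, 100, 3} input shape

model = Sequential()
model.add(LSTM(64, input_shape=(timesteps, features), return_sequences=False))
model.add(Dense(1))                # single-step output, no Flatten needed
model.compile(optimizer='adam', loss='mse')

# X has shape (samples, timesteps, features), y has shape (samples, 1):
# model.fit(X, y, epochs=20, batch_size=32, validation_split=0.1)
</code></pre>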
2019-12-01 17:41:29.237000+00:00
2019-12-01 17:41:29.237000+00:00
null
null
59,125,775
<p>I am trying to build a simple NN for time-series analysis. So far I have added only Dense layers (but feel free to comment on LSTM etc. if that is what you prefer). </p> <p>My input is in the usual format {samples, time steps, features}, let's say {1000, 100, 3}, and I want a single-step output. So far I cannot understand <strong>whether I should flatten the data, and where</strong>. </p> <p>The results change if I don't flatten, if I flatten before the last layer, and if I flatten before the first layer. But I have no way to tell yet if any of these is the correct one. </p> <p>A good discussion can be found under <a href="https://stackoverflow.com/questions/43237124/what-is-the-role-of-flatten-in-keras">this question</a>. However, please note that I am specifically interested in <strong>time series</strong>. So, I wonder if flattening before the first layer might in some way remove the information needed for time dependence... </p>
2019-12-01 13:03:23.447000+00:00
2019-12-01 17:41:29.237000+00:00
null
machine-learning|keras|neural-network|time-series|flatten
['https://keras.io/layers/recurrent/#lstm', 'https://keras.io/layers/recurrent/#gru', 'https://github.com/philipperemy/keras-tcn', 'https://arxiv.org/abs/1803.01271', 'https://stackoverflow.com/questions/43034960/many-to-one-and-many-to-many-lstm-examples-in-keras']
5
48,448,831
<p>You might want to read my paper <a href="https://arxiv.org/abs/1801.07779" rel="nofollow noreferrer">The WiLI benchmark dataset for written language identification</a> and try <a href="https://github.com/MartinThoma/lidtk" rel="nofollow noreferrer"><code>lidtk</code></a>.</p> <p>TL;DR: Give CLD-2 a try.</p>
2018-01-25 17:35:43.127000+00:00
2018-01-25 17:35:43.127000+00:00
null
null
8,157,331
<p>I am using <a href="http://code.google.com/p/tesseract-ocr/" rel="nofollow noreferrer">tesseract</a> for OCR, mainly on invoices. However, tesseract requires you to specify the language before it starts processing a file. </p> <p>My idea is to perform OCR with a predefined default language first. Then I'd like to use the resulting text to check which language is actually used. If it is not the default language, I would process the file again in order to get a better result from tesseract. </p> <p>But how can I implement a language detection algorithm? Is there a C++ library I could use?</p>
2011-11-16 19:15:16.740000+00:00
2018-01-25 17:36:06.493000+00:00
2018-01-25 17:36:06.493000+00:00
c++|nlp|ocr|language-detection
['https://arxiv.org/abs/1801.07779', 'https://github.com/MartinThoma/lidtk']
2
44,350,483
<p>There's no inherent guarantee in Word2Vec/Doc2Vec that the generated set of vectors is symmetrically distributed around the origin point. They could be disproportionately in some directions, which would yield the results you've seen. </p> <p>In a few tests I just did on the toy-sized dataset ('lee corpus') used in the bundled gensim <code>docs/notebooks/doc2vec-lee.ipynb</code> notebook, checking the cosine-similarities of all documents against the first document, it vaguely seems that:</p> <ol> <li>using hierarchical-softmax rather than negative sampling (<code>hs=1, negative=0</code>) yields a balance between >0.0 and &lt;0.0 cosine-similarities that is closer-to (but not yet quite) half and half</li> <li>using a smaller number of negative samples (such as <code>negative=1</code>) yields a more balanced set of results; using a larger number (such as <code>negative=10</code>) yields relatively more >0.0 cosine-similarities</li> </ol> <p>While not conclusive, this is mildly suggestive that the arrangement of vectors may be influenced by the <code>negative</code> parameter. Specifically, typical negative-sampling parameters, such as the default <code>negative=5</code>, mean words will be trained more times as non-targets, than as positive targets. That <em>might</em> push the preponderance of final coordinates in one direction. (More testing on larger datasets and modes, and more analysis of how the model setup could affect final vector positions, would be necessary to have more confidence in this idea.)</p> <p>If for some reason you wanted a more balanced arrangement of vectors, you could consider transforming their positions, post-training. </p> <p>There's an interesting recent paper in the word2vec space, <a href="https://arxiv.org/abs/1702.01417" rel="nofollow noreferrer">"All-but-the-Top: Simple and Effective Postprocessing for Word Representations"</a>, that found sets of trained word-vectors don't necessarily have a 0-magnitude mean – they're on average in one direction from the origin. And further, this paper reports that subtracting the common mean (to 're-center' the set), and also removing a few other dominant directions, can improve the vectors' usefulness for certain tasks. </p> <p>Intuitively, I suspect this 'all-but-the-top' transformation might serve to increase the discriminative 'contrast' in the resulting vectors. </p> <p>A similar process <em>might</em> yield similar benefits for doc-vectors – and would likely make the full set of cosine-similarities, to any doc-vector, more balanced between >0.0 and &lt;0.0 values.</p>
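<p>If you want to try that post-processing on your doc-vectors, a rough sketch of the recentering step (plus removal of a few top principal directions) could look like this; the number of removed components is a free parameter, and <code>vecs</code> stands for the (n_docs, dim) array of doc-vectors you pull out of your trained model:</p> <pre><code>import numpy as np
from sklearn.decomposition import PCA

def recenter_doc_vectors(vecs, n_components=2):
    """Subtract the common mean and remove the top principal directions,
    roughly following the 'All-but-the-Top' post-processing."""
    centered = vecs - vecs.mean(axis=0)
    top = PCA(n_components=n_components).fit(centered).components_  # (k, dim)
    # Project out the dominant directions from every vector.
    return centered - centered.dot(top.T).dot(top)
</code></pre>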
2017-06-04 03:22:51.607000+00:00
2017-06-05 14:51:37.860000+00:00
2017-06-05 14:51:37.860000+00:00
null
44,345,576
<p>I have calculated document similarities using Doc2Vec.docvecs.similarity() in gensim. Now, I would either expect the cosine similarities to lie in the range [0.0, 1.0] if gensim used the absolute value of the cosine as the similarity metric, or roughly half of them to be negative if it does not.</p> <p>However, what I am seeing is that <em>some</em> similarities are negative, but they are very rare – less than 1% of pairwise similarities in my set of 30000 documents.</p> <p>Why are almost all of the similarities positive?</p>
2017-06-03 15:29:31.233000+00:00
2017-06-05 14:51:37.860000+00:00
null
python|gensim|word2vec|doc2vec
['https://arxiv.org/abs/1702.01417']
1
41,735,496
<p>Natural Language Processing (NLP) is the text-based approach to machine comprehension. There are some very good papers that may give you a place to start:</p> <p><a href="https://openreview.net/pdf?id=B1-q5Pqxl" rel="nofollow noreferrer">https://openreview.net/pdf?id=B1-q5Pqxl</a></p> <p>and</p> <p><a href="https://arxiv.org/pdf/1611.09830v2.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.09830v2.pdf</a></p> <p>A Match-LSTM neural network architecture seems to be the current state of the art for what you describe:</p> <blockquote> <p>My long term goal is within the next 3 years is to have my program distinguish sentence structure, then the program would be able to piece together the nouns, verbs, adverbs, etc., etc.,. So it can create its own sentences to ask the user a question.</p> </blockquote> <p>Some example code was published by Wang and Jiang: <a href="https://github.com/shuohangwang/SeqMatchSeq" rel="nofollow noreferrer">https://github.com/shuohangwang/SeqMatchSeq</a></p> <p>Your goal is broad, but I believe it is achievable. Massive milestones have been reached already. Good luck!</p>
2017-01-19 06:50:35.717000+00:00
2017-01-19 06:59:32.060000+00:00
2017-01-19 06:59:32.060000+00:00
null
37,911,869
<p>I am not sure where to post this. However, it's a general question. I program voice-activated software, and I had a thought. </p> <p>If I were to program a speech neural network, what options would be best?</p> <p>I know that AForge has machine learning with fuzzy logic. However, suppose I wanted to start from scratch.</p> <p>I would be using back propagation, and possibly recursive learning. Would there be a way I could extract the default speech sound wave files that MS Speech uses, instead of recording every single word in the English language into a sound wave? </p> <p>I have programmed feed-forward and back-propagation neural networks before.</p> <p>My question at this moment is: is there a way to pull the sound wave files to use as inputs, or will I need to record each word into a sound wave myself?</p> <p>My long-term goal, within the next 3 years, is to have my program distinguish sentence structure; the program would then be able to piece together the nouns, verbs, adverbs, etc., so it can create its own sentences to ask the user a question.</p> <p>I do not want to use open source except for the sound waves. I can handle the coding. I just need to know whether I can pull the MS Speech sound waves, or whether I need to record them myself.</p>
2016-06-19 21:08:04.733000+00:00
2017-01-19 06:59:32.060000+00:00
null
c#|neural-network
['https://openreview.net/pdf?id=B1-q5Pqxl', 'https://arxiv.org/pdf/1611.09830v2.pdf', 'https://github.com/shuohangwang/SeqMatchSeq']
3
50,472,527
<p>RDF stores can be considered as a subclass of graph databases:</p> <ol> <li><p>The central RDF 1.1 notion is <a href="https://www.w3.org/TR/rdf11-concepts/#section-rdf-graph" rel="nofollow noreferrer">RDF graph</a>.</p></li> <li><p>Many triplestores have word 'graph' in their names: <a href="/questions/tagged/graphdb" class="post-tag" title="show questions tagged &#39;graphdb&#39;" rel="tag">graphdb</a>, <a href="/questions/tagged/blazegraph" class="post-tag" title="show questions tagged &#39;blazegraph&#39;" rel="tag">blazegraph</a>, <a href="/questions/tagged/allegrograph" class="post-tag" title="show questions tagged &#39;allegrograph&#39;" rel="tag">allegrograph</a><br> (some of them are not only RDF stores though).</p></li> </ol> <p>Obviously, there are differences between the RDF model and other graph database models. These differences are described e.g. in <a href="https://arxiv.org/pdf/1801.00036.pdf" rel="nofollow noreferrer">An introduction to Graph Data Management</a> by Renzo Angles and Claudio Gutierrez.</p> <hr> <p>See also <a href="http://www.snee.com/bobdc.blog/2018/04/reification-is-a-red-herring.html" rel="nofollow noreferrer">Reification is red herring</a> by Bob DuCharme.</p>
2018-05-22 16:30:26.307000+00:00
2018-05-25 15:29:31.833000+00:00
2018-05-25 15:29:31.833000+00:00
null
50,468,986
<p>After reading some articles about NoSQL databases, I found that there are 4 types of NoSQL databases, with existing databases for every type.</p> <p>I understood that NoSQL means Not Only SQL, i.e. every database that uses another query language, but I am confused about why RDF stores are not included in this selection of types (Key/value, Document, Column and Graph).</p>
2018-05-22 13:29:18.183000+00:00
2018-05-25 15:29:31.833000+00:00
2018-05-23 16:31:39.053000+00:00
database|nosql|rdf
['https://www.w3.org/TR/rdf11-concepts/#section-rdf-graph', '/questions/tagged/graphdb', '/questions/tagged/blazegraph', '/questions/tagged/allegrograph', 'https://arxiv.org/pdf/1801.00036.pdf', 'http://www.snee.com/bobdc.blog/2018/04/reification-is-a-red-herring.html']
6
63,122,436
<p>Concerning your last question about estimating CI ranges, there are three common methods for ML estimators:</p> <ol> <li>Variance estimation from the inverted Hessian matrix.</li> <li>Jackknife estimator for the variance (simpler and more stable, if the Hessian is estimated numerically, but computationally more expensive).</li> <li>Bootstrap CIs (the computationally most expensive approach).</li> </ol> <p>For bootstrap CIs, you do not need to implement them yourself (bias correction, e.g., can be tricky), but can rely on the R library <em>boot</em>.</p> <p>Incidentally, I wrote a summary with R code for all three approaches two years ago: <a href="https://arxiv.org/abs/1807.03582" rel="nofollow noreferrer">Construction of Confidence Intervals</a> (see section 5). For the method utilizing the Hessian matrix, for example, the outline is as follows:</p> <pre><code>lnL &lt;- function(theta1, theta2, ...) { # definition of the negative (!) # log-likelihood function... } # starting values for the optimization theta0 &lt;- c(start1, start2, ...) # optimization p &lt;- optim(theta0, lnL, hessian=TRUE) if (p$convergence == 0) { theta &lt;- p$par covmat &lt;- solve(p$hessian) sigma &lt;- sqrt(diag(covmat)) } </code></pre> <p>The function <em>mle</em> from <em>stats4</em> already wraps the covariance matrix estimation and returns it in <em>vcov</em>. In the practical use cases in which I have tried this (paired comparison models), though, this estimation was rather unstable, and I resorted to the jackknife method instead.</p>
2020-07-27 19:43:27.487000+00:00
2020-07-28 11:32:36.850000+00:00
2020-07-28 11:32:36.850000+00:00
null
63,117,891
<p>I have been trying to generate R code for maximum likelihood estimation from a log likelihood function in a <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/risa.12880" rel="nofollow noreferrer">paper</a> (equation 9 in page 609). Authors in the paper estimated it using MATLAB, which I am not familiar with. So I tried to generate codes in R.</p> <p>Here is the snapshot of the log likelihood function in the paper:</p> <p><a href="https://i.stack.imgur.com/rrZz1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rrZz1.png" alt="enter image description here" /></a></p> <p>, where</p> <p><em>r</em>: Binary decision (0 or 1) indicating infested plant(s) detection (1) or not (0).</p> <p><em>e</em>: Inspection efficiency. This is known.</p> <p><em>n</em>: Sample size</p> <p>The overall objective is to estimate plant infestation rate (gamma: γ) and epsilon (<em>e</em>) based on binary decision of presence and absence of infested plants instead of using infested plant(s) detected. So, the function has only binary information (<em>r</em>) of infested plant detection and sample size. Since epsilon (<em>e</em>) is known or fixed, the actual goal is to estimate gamma (γ) in a population.</p> <p>Another objective is to compare estimated infestation rates from above with ones in hypergeometric sampling formula in <a href="https://pubmed.ncbi.nlm.nih.gov/19139495/" rel="nofollow noreferrer">another paper</a> (in page 6). The formula is:</p> <p><a href="https://i.stack.imgur.com/ZRk30.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZRk30.png" alt="enter image description here" /></a></p> <p>This formula generates required sample size to detect infested plants with selected probability (e.g., 95) given an infested rate. For example:</p> <pre><code># Sample size calculation function fosgate.sample1 &lt;- function(box, p, ci){ # Note: box represent total plant number ninf &lt;- p*box sample.size &lt;- round(((1-(1-ci)^(1/ninf))*(box-(ninf-1)/2))) #sample.size &lt;- ceiling(((1-(1-ci)^(1/ninf))*(box-(ninf-1)/2))) sample.size } fosgate.sample1(box=100, p = .05, ci = .95) # where box: population or total plants, p: infestation rate, and ci: probability of detection ## 44 </code></pre> <p>The idea is if sample size (e.g., 44) and binary decision data are provided the log-likelihood function can be used to estimate infestation rate and the rate may be close to anticipated rate (e.g., .05). <strong>Ultimately, I would like to compare plant infestation rates (gamma: γ) estimated from the log likelihood function above and D/N in the sample size calculation formula (second) or p in the sample size code below.</strong></p> <p>I generated R code for the log-likelihood described above.</p> <pre><code>### MLE with stat4 library(stats4) # Log-likelihood function plant.inf.lik &lt;- function(inf.rate){ logl &lt;- suppressWarnings( sum((1-insp.result)*n*log(1-inf.rate) + insp.result*log(1-(1-inf.rate)^n)) ) return(-logl) } </code></pre> <p>Using the sample size function (i.e., fosgate.sample1) I generated sample sizes for various cases of total plant (or box) and anticipated detection rate (p) in the function. Since I am also interested in error/confidence ranges of estimated plant infestation rates, I used bootstrapping to calculate range of estimates (I am not sure if this is appropriate/acceptable). 
Here is the final code I generated:</p> <pre><code>### MLE and CI with bootstrapping with multiple scenarios plant &lt;- c(100, 500, 1000, 5000, 10000, 100000) # Total plant number ir &lt;- seq(.01, .2, by = .01) # Plant infestation rate df.result &lt;- data.frame(expand.grid(plant=plant, inf.rate = ir)) df.result$sample.size &lt;- fosgate.sample1(box=df.result$plant, p=df.result$inf.rate, ci=.95) # Sample size df.result$insp.result &lt;- 1000 # Shipment number (can be replaced with random integers) df.result &lt;- df.result[order(df.result$plant, df.result$inf.rate, df.result$sample.size), ] rownames(df.result) &lt;- 1:nrow(df.result) df.result$est.mean &lt;- 0 #df.result$est.median &lt;- 0 df.result$est.lower.ci &lt;- 0 df.result$est.upper.ci &lt;- 0 df.result$nsim &lt;- 0 str(df.result) head(df.result) # Looping est &lt;- rep(NA, 1000) for(j in 1:nrow(df.result)){ for(i in 1:1000){ insp.result &lt;- sample(c(rep(1, df.result$insp.result[j]-df.result$insp.result[j]*df.result$inf.rate[j]), rep(0, df.result$insp.result[j]*df.result$inf.rate[j]))) ir &lt;- df.result$inf.rate[j] n &lt;- df.result$sample.size[j] insp.result &lt;- sample(insp.result, replace = TRUE) est[i] &lt;- mle(plant.inf.lik, start = list(inf.rate = ir*.9), method = &quot;BFGS&quot;, nobs = length(insp.result))@coef df.result$est.mean[j] &lt;- mean(est, na.rm = TRUE) # df.result$est.median[j] &lt;- median(est, na.rm = TRUE) df.result$est.lower.ci[j] &lt;- quantile(est, prob = .025, na.rm = TRUE) df.result$est.upper.ci[j] &lt;- quantile(est, prob = .975, na.rm = TRUE) df.result$nsim[j] &lt;- length(est) } } # Significance test result sig &lt;- ifelse(df.result$inf.rate &gt;= df.result$est.lower.ci &amp; df.result$inf.rate &lt;= df.result$est.upper.ci, &quot;no sig&quot;, &quot;sig&quot;) table(sig) # Plot library(ggplot2) library(reshape2) df.result$num &lt;- ave(df.result$inf.rate, df.result$plant, FUN=seq_along) df.result.m &lt;- melt(df.result, id.vars=c(&quot;plant&quot;, &quot;sample.size&quot;, &quot;insp.result&quot;, &quot;est.lower.ci&quot;, &quot;est.upper.ci&quot;, &quot;nsim&quot;, &quot;num&quot;)) df.result.m$est.lower.ci &lt;- ifelse(df.result.m$variable == &quot;inf.rate&quot;, NA, df.result.m$est.lower.ci) df.result.m$est.upper.ci &lt;- ifelse(df.result.m$variable == &quot;inf.rate&quot;, NA, df.result.m$est.upper.ci) str(df.result.m) ggplot(data = df.result.m, aes(x = num, y = value, group=variable, color=variable, shape=variable))+ geom_point()+ geom_errorbar(aes(ymin = est.lower.ci, ymax = est.upper.ci), width=.5)+ scale_y_continuous(breaks = seq(0, .2, .02))+ xlab(&quot;Index&quot;)+ ylab(&quot;Plant infestation rate&quot;)+ facet_wrap(~plant, ncol = 3) </code></pre> <p>When I ran the code, I was able to obtain results and to compare estimated (est.mean) and anticipated (inf.rate) infestation rates as shown in the plot below.</p> <p><a href="https://i.stack.imgur.com/8yBvU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8yBvU.png" alt="enter image description here" /></a></p> <p>If results are correct, plot indicates that estimation looks fine but off for greater infestation rates.</p> <p>Also, I always got warning messages without &quot;suppressWarnings&quot; function and occasionally error messages below. 
I have no clue how to fix them.</p> <pre><code>## Warning messages ## 29: In log(1 - (1 - inf.rate)^n) : NaNs produced ## 30: In log(1 - inf.rate) : NaNs produced ## Error message (occasionally) ## Error in solve.default(oout$hessian) : ## Lapack routine dgesv: system is exactly singular: U[1,1] = 0 </code></pre> <p>My questions are:</p> <ul> <li>Is R function (plant.inf.lik) for maximum likelihood estimation of the log-likelihood function appropriate?</li> <li>Should I take care of warning and error messages? If yes, how? Again, I have no clue how to fix...</li> <li>Is bootstrapping (resampling?) method appropriate to estimate CI ranges and/or standard error?</li> </ul> <p>I found <a href="https://math.stackexchange.com/questions/40319/maximum-likelihood-estimate-of-hypergeometric-distribution-parameter">this link</a> useful for alternative approach. Although I am still working both approaches together, results seem different (maybe following question).</p> <p>Any suggestion would be greatly appreciated.</p>
2020-07-27 14:51:18.200000+00:00
2020-07-28 11:32:36.850000+00:00
2020-07-27 16:12:54.667000+00:00
r|max|estimation|mle
['https://arxiv.org/abs/1807.03582']
1
52,162,990
<p>Cannot say anything about your own data, but the penultimate layer of Inception V3 for Grad-CAM visualization is indeed <code>mixed10</code> (idx 310), as reported in the notebook you have linked to:</p> <blockquote> <p>310 is concatenation before global average pooling</p> </blockquote> <p>Rationale: since the output of <code>conv2d_94</code> (299) is connected downstream with other convolutional layers (or concatenations of), like <code>mixed9_1</code>, <code>concatenate_2</code> etc., by definition it cannot be the penultimate <em>convolutional</em> layer; <code>mixed10</code>, on the other hand, is not - on the contrary, it is just one layer before the final average pooling one. That the penultimate layer should be a convolutional layer, and not a pooling one, is <strong>suggested</strong> by <a href="https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/first_edition/5.4-visualizing-what-convnets-learn.ipynb" rel="nofollow noreferrer">Chollet's exposition</a>, where for VGG he uses <code>block5_conv3</code>, and not <code>block5_pool</code> which is immediately afterwards (although truth is, even using <code>block5_pool</code> seems to give very similar visual results).</p> <p>Let me elaborate a little, and explain the emphasis on &quot;suggested&quot; above...</p> <p>As with many other things in current deep learning research &amp; practice, Grad-CAM is a <em>heuristic</em>, not a &quot;hard&quot; scientific method; as such, there are recommendations &amp; expectations on how to use it and what the results might be, but not hard rules (and &quot;appropriate&quot; layers). Consider the following excerpt from the <a href="https://arxiv.org/pdf/1611.07450.pdf" rel="nofollow noreferrer">original paper</a> (end of section 2, emphasis mine):</p> <blockquote> <p>We <strong>expect</strong> the last convolutional layers to have the best compromise between high-level semantics and detailed spatial information, so we use these feature maps to compute Grad-CAM and Guided Grad-CAM.</p> </blockquote> <p>i.e. there are indeed recommendations &amp; expectations, as I already said, but a certain experimenting &amp; free-wheeling attitude is expected...</p> <hr /> <p>Now, assuming you are following <a href="https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/first_edition/5.4-visualizing-what-convnets-learn.ipynb" rel="nofollow noreferrer">Chollet's notebook</a> on the subject (i.e.
using pure Keras, and not the Keras-vis package), these are the <strong>changes</strong> in the code you need in order to make it work with Inception V3:</p> <pre class="lang-python prettyprint-override"><code># cell 24 from keras import backend as K from keras.applications.inception_v3 import InceptionV3 K.clear_session() K.set_learning_phase(0) # needs to be set BEFORE building the model model = InceptionV3(weights='imagenet') # in cell 27 from keras.applications.inception_v3 import preprocess_input, decode_predictions img = image.load_img(img_path, target_size=(299, 299)) # different size than VGG # in cell 31: last_conv_layer = model.get_layer('mixed10') for i in range(2048): # was 512 for VGG conv_layer_output_value[:, :, i] *= pooled_grads_value[i] </code></pre> <p>And the resulting superimposed heatmap on the original <code>creative_commons_elephant.jpg</code> image should look like this:</p> <p><a href="https://i.stack.imgur.com/5GNMh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5GNMh.jpg" alt="enter image description here" /></a></p> <p>which, arguably, is not <em>that</em> different than the respective image by VGG produced in Chollet's notebook (although admittedly the heatmap is indeed more spread, and it does not seem to conform to Chollet's narrative about 'focusing on the ears')...</p>
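<p>As an aside, here is a rough, self-contained sketch of the same Grad-CAM computation written against the <code>tf.GradientTape</code> API of TF 2.x (this is not the book's code nor keras-vis; the random input image is only a placeholder for a real preprocessed photo):</p> <pre><code>import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input

model = InceptionV3(weights='imagenet')
grad_model = tf.keras.models.Model(model.inputs, [model.get_layer('mixed10').output, model.output])

# placeholder input: replace with a real image resized to (1, 299, 299, 3)
img = preprocess_input(np.random.uniform(0, 255, (1, 299, 299, 3)).astype('float32'))

with tf.GradientTape() as tape:
    conv_out, preds = grad_model(img)
    score = preds[:, tf.argmax(preds[0])]        # score of the top predicted class

grads = tape.gradient(score, conv_out)           # d(score) / d(feature maps)
pooled = tf.reduce_mean(grads, axis=(0, 1, 2))   # one weight per channel (2048 for mixed10)
heatmap = tf.reduce_sum(conv_out[0] * pooled, axis=-1)
heatmap = tf.maximum(heatmap, 0) / (tf.reduce_max(heatmap) + 1e-8)
print(heatmap.shape)                             # (8, 8), to be upsampled over the input image
</code></pre>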
2018-09-04 09:22:25.047000+00:00
2022-02-09 19:04:10.577000+00:00
2022-02-09 19:04:10.577000+00:00
null
52,162,467
<p>I've been trying to visualize heatmaps for Inception V3. It was my understanding the penultimate layer should be the last convolutional layer, which would be <code>conv2d_94</code> (idx 299). However, this gives very coarse maps (big regions). I tried to use another layer <code>mixed10</code> (idx 310) as suggested in <a href="https://github.com/Abhijit-2592/Keras-custom-callbacks/blob/master/how%20to%20use%20grad-cam%20in%20inceptionv3_copy.ipynb" rel="nofollow noreferrer">this notebook</a> for issue as described <a href="https://github.com/raghakot/keras-vis/issues/65" rel="nofollow noreferrer">here</a> and while the regions are smaller, it still doesn't look great. Some others do seem to use <code>conv2d_94</code>, like <a href="https://github.com/raghakot/keras-vis/issues/23" rel="nofollow noreferrer">here</a>.</p> <p>I understand it might indicate my model is simply not paying attention to the right things, but also conceptually I'm confused which layer should be used. What is an appropriate penultimate layer?</p> <p>I'm using Keras 2.2.0 with <code>visualize_cam</code> from <code>keras-vis</code>. </p> <pre><code>heatmap = visualize_cam(model, layer_idx, filter_indices=classnum, seed_input=preprocess_img, backprop_modifier=None) </code></pre> <p>Where <code>layer_idx</code> is the idx of <code>dense_2</code>.</p> <p>I've tried not defining <code>penultimate_layer</code>, which according to the <a href="https://raghakot.github.io/keras-vis/vis.visualization/#visualize_cam" rel="nofollow noreferrer">documentation</a> sets the parameter to the nearest penultimate <code>Conv</code> or <code>Pooling</code> layer. This gives the same results as <code>penultimate_layer=299</code>.</p>
2018-09-04 08:55:12.710000+00:00
2022-02-09 19:04:10.577000+00:00
2018-10-26 09:27:15.470000+00:00
machine-learning|neural-network|keras|deep-learning|conv-neural-network
['https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/first_edition/5.4-visualizing-what-convnets-learn.ipynb', 'https://arxiv.org/pdf/1611.07450.pdf', 'https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/first_edition/5.4-visualizing-what-convnets-learn.ipynb', 'https://i.stack.imgur.com/5GNMh.jpg']
4
69,423,489
<p>First of all, what is pretraining? The procedure helps the model learn syntactic &lt;==&gt; semantic (this is a spectrum) features of the language using an enormous amount of raw text (40GB) and processing power. Objective function: causal language modeling and masked language modeling.</p> <p>What about fine-tuning a pre-trained model? Suppose there is a model which has knowledge about the general aspects of the English language (POS, dependency trees, subjects ... a little of everything). Fine-tuning helps us direct the focus of the model to the most important features in our dataset; let's say in your dataset some syntactic feature is the game-changer, and the model should pay attention to it! Objective function: based on the downstream task.</p> <p>Training from scratch isn't feasible for most of us, but there is an approach to continue the pre-training phase using your own (task-specific) corpus/corpora without damaging the model's existing knowledge (hopefully)! Objective function: causal language modeling and masked language modeling.</p> <p><a href="https://arxiv.org/pdf/1801.06146.pdf" rel="nofollow noreferrer">Here</a> is an article about this approach and its effectiveness, and you can take inspiration from <a href="https://github.com/allenai/scibert" rel="nofollow noreferrer">SciBERT</a> and <a href="https://github.com/manueltonneau/covid-berts" rel="nofollow noreferrer">COVID-BERTs</a>. As you would expect, they use pre-trained BERT as a starting point and continue pre-training on a domain-specific corpus!</p>
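<p>To make the &quot;continue pre-training on your own corpus&quot; option concrete, here is a rough sketch with the Hugging Face <code>transformers</code> and <code>datasets</code> libraries (the file <code>domain_corpus.txt</code>, the output directory and all hyperparameter values are placeholders, not recommendations):</p> <pre><code>from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModelForMaskedLM.from_pretrained('bert-base-uncased')   # start from pre-trained weights

# one raw sentence/paragraph of your task-specific corpus per line
ds = load_dataset('text', data_files={'train': 'domain_corpus.txt'})['train']
ds = ds.map(lambda ex: tokenizer(ex['text'], truncation=True, max_length=128),
            batched=True, remove_columns=['text'])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir='bert-domain', num_train_epochs=1,
                         per_device_train_batch_size=16)

Trainer(model=model, args=args, train_dataset=ds, data_collator=collator).train()
model.save_pretrained('bert-domain')   # then fine-tune this checkpoint on the downstream task
</code></pre>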
2021-10-03 09:10:00.387000+00:00
2021-10-03 09:10:00.387000+00:00
null
null
69,423,258
<p>I am new to BERT.</p> <p>I have an Amazon review dataset, where I want to predict the star rating based on the review.</p> <p>I know I can use a pretrained BERT model as shown <a href="https://github.com/nicknochnack/BERTSentiment/blob/main/Sentiment.ipynb" rel="nofollow noreferrer">here</a>.</p> <p>But I want to train the BERT model on my own dataset. Is that what's being done <a href="https://medium.com/analytics-vidhya/fine-tuning-bert-for-amazon-food-reviews-32e474de0e51" rel="nofollow noreferrer">here</a>? And can I apply this type of 'fine tuning' to a pretrained model with any dataset to get more accurate results, or do I have to do something else to train the model from scratch?</p> <p>And if I do want to train a model from scratch, where would I start?</p>
2021-10-03 08:30:00.850000+00:00
2021-10-03 09:10:00.387000+00:00
null
python|tensorflow|nlp|tokenize|bert-language-model
['https://arxiv.org/pdf/1801.06146.pdf', 'https://github.com/allenai/scibert', 'https://github.com/manueltonneau/covid-berts']
3
61,839,719
<p>BERT can be viewed as a language encoder, which is trained on a humongous amount of data to learn the language well. As we know, the original BERT model was trained on the entire English Wikipedia and Book corpus, which sums to <strong>3,300M</strong> words. BERT-base has 109M model parameters. So, if you think you have large enough data to train BERT, then the answer to your question is yes. </p> <p>However, when you said "still achieve a good result", I assume you are comparing against the original BERT model. In that case, the answer lies in the size of the training data.</p> <p>I am wondering why do you prefer to train BERT from scratch instead of fine-tuning it? Is it because you are afraid of the domain adaptation issue? If not, pre-trained BERT is perhaps a better starting point.</p> <p>Please note, if you want to train BERT from scratch, you may consider a <strong>smaller</strong> architecture. You may find the following papers useful.</p> <ul> <li><a href="https://arxiv.org/abs/1908.08962" rel="noreferrer">Well-Read Students Learn Better: On the Importance of Pre-training Compact Models</a></li> <li><a href="https://arxiv.org/abs/1909.11942" rel="noreferrer">ALBERT: A Lite BERT for Self-supervised Learning of Language Representations</a></li> </ul>
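<p>If you do go the from-scratch route, a reduced architecture is easy to set up; here is a minimal sketch with Hugging Face <code>transformers</code> (all sizes below are illustrative, not tuned recommendations):</p> <pre><code>from transformers import BertConfig, BertForSequenceClassification

# a deliberately small BERT: 4 layers, 256 hidden units, 4 attention heads
config = BertConfig(vocab_size=30522, hidden_size=256, num_hidden_layers=4,
                    num_attention_heads=4, intermediate_size=1024, num_labels=5)

model = BertForSequenceClassification(config)   # randomly initialized, no pre-trained weights
print(sum(p.numel() for p in model.parameters()))   # far fewer parameters than BERT-base's 109M
</code></pre>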
2020-05-16 16:05:39.407000+00:00
2020-05-16 16:05:39.407000+00:00
null
null
61,826,824
<p>BERT pre-training of the base model is done with a language modeling approach, where we mask a certain percentage of tokens in a sentence, and we make the model learn to predict those masked tokens. Then, I think in order to do downstream tasks, we add a newly initialized layer and we fine-tune the model.</p> <p>However, suppose we have a gigantic dataset for sentence classification. Theoretically, can we initialize the BERT base architecture from scratch, train both the additional downstream task-specific layer + the base model weights from scratch with this sentence classification dataset only, and still achieve a good result?</p> <p>Thanks.</p>
2020-05-15 19:21:56.593000+00:00
2021-02-07 15:04:38.420000+00:00
null
nlp|pytorch|bert-language-model
['https://arxiv.org/abs/1908.08962', 'https://arxiv.org/abs/1909.11942']
2
59,679,525
<p>I think that the best choice here is to use some probability-distribution pseudo-distance; the first one that comes to mind is the Kullback-Leibler divergence. It is already implemented in both PyTorch and Keras (see <a href="https://pytorch.org/docs/stable/nn.html#kldivloss" rel="nofollow noreferrer">KLDivLoss</a> and <a href="https://keras.io/losses/#kullback_leibler_divergence" rel="nofollow noreferrer">kullback_leibler_divergence</a>). Other well-known distances include the Jensen-Shannon divergence and the Earth-Mover distance (this is the same distance that was used in <a href="https://arxiv.org/abs/1701.07875" rel="nofollow noreferrer">WGAN</a>).</p>
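<p>For instance, a minimal PyTorch sketch for the MxN heatmap case described in the question could look like this (random tensors stand in for the network output and the fixed Gaussian label; note that <code>kl_div</code> expects log-probabilities as input and a proper probability distribution as target):</p> <pre><code>import torch
import torch.nn.functional as F

B, M, N = 8, 32, 32
pred = torch.randn(B, M, N)      # raw network output (logits), stand-in values
target = torch.rand(B, M, N)     # fixed Gaussian-like label, stand-in values

log_probs = F.log_softmax(pred.view(B, -1), dim=1)                    # input as log-probabilities
target_probs = target.view(B, -1)
target_probs = target_probs / target_probs.sum(dim=1, keepdim=True)   # normalize to a distribution

loss = F.kl_div(log_probs, target_probs, reduction='batchmean')
print(loss.item())
</code></pre>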
2020-01-10 10:07:05.617000+00:00
2020-01-10 10:07:05.617000+00:00
null
null
59,677,761
<p>I have a simple CNN with the inputs as </p> <ul> <li>Cropped grayscale patches of size MxN centered on the object of interest. The intensity of each patch is rescaled to [0, 1].</li> <li>Target Gaussian label of the same size MXN with values ranging in [5.0155e-173, 1]. This label is kept fixed throughout the training.</li> </ul> <p><a href="https://i.stack.imgur.com/fzyRs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fzyRs.png" alt="Target Gaussian label with (5.0155e-173, 1) as (min, max)"></a></p> <p>The goal is to learn the target label and use the learned model to detect the object in a test image. I am using Adam optimizer with various loss functions such as <code>categorical_crossentropy</code>, <code>mean_squared_error</code>, and <code>mean_absolute_error</code> but training halts soon probably due to the low values returned by all these loss functions (vanishing gradients?). Increasing the batch size from 1 to 16~32 sometimes helps in completing the iteration but gives undesired outcomes at test time.</p> <p>Is it because the loss function is too sensitive to the lower values in the target and even treats them as outliers hence steering the whole learning process in the wrong direction?</p> <p>I'll be grateful for your help in fixing the loss function in such a scenario.</p>
2020-01-10 08:17:09.980000+00:00
2020-01-10 10:07:05.617000+00:00
null
keras|regression|object-detection|tensorflow2.0|loss-function
['https://pytorch.org/docs/stable/nn.html#kldivloss', 'https://keras.io/losses/#kullback_leibler_divergence', 'https://arxiv.org/abs/1701.07875']
3
73,035,459
<p><strong>Evolution Strategies</strong> optimization happens on a population level. An evolution strategy algorithm iteratively (i) samples a batch of candidate solutions from the search space, (ii) evaluates them and (iii) discards the ones with low fitness values. The sampling for a new iteration (or generation) happens around the mean of the best-scoring candidate solutions from the previous iteration. Doing so enables evolution strategies to direct the search towards a promising location in the search space.</p> <p><strong>Reinforcement learning</strong> requires the problem to be formulated as a Markov Decision Process (MDP). An RL agent optimizes its behavior (or policy) by maximizing a cumulative reward signal received on a transition from one state to another. Since the problem is abstracted as an MDP, learning can happen on a step or episode level. Learning per step (or per N steps) is done via temporal-difference (TD) learning, and learning per episode is done via Monte Carlo methods. So far I am talking about learning via action-value functions (learning the values of actions). Another way of learning is by optimizing the parameters of a neural network representing the policy of the agent directly via gradient ascent. This approach is introduced in the REINFORCE algorithm, and the general approach is known as policy-based RL.</p> <p>For a comprehensive comparison, check out this paper: <a href="https://arxiv.org/pdf/2110.01411.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2110.01411.pdf</a></p>
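<p>To make the population-level idea tangible, here is a toy evolution strategy loop in plain NumPy; the quadratic fitness function and all constants are arbitrary choices for the illustration:</p> <pre><code>import numpy as np

def fitness(theta):
    # toy objective: the optimum is the point (3, 3, ..., 3)
    return -np.sum((theta - 3.0) ** 2)

dim, pop_size, elite, sigma = 10, 50, 10, 0.5
mean = np.zeros(dim)

for generation in range(200):
    candidates = mean + sigma * np.random.randn(pop_size, dim)   # (i) sample around the mean
    scores = np.array([fitness(c) for c in candidates])          # (ii) evaluate
    best = candidates[np.argsort(scores)[-elite:]]               # (iii) keep the fittest
    mean = best.mean(axis=0)                                     # recenter the search distribution

print(mean)   # should end up close to 3.0 in every coordinate
</code></pre>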
2022-07-19 10:41:31.183000+00:00
2022-07-19 10:41:31.183000+00:00
null
null
53,307,599
<p>I am learning about the approach employed in Reinforcement Learning for robotics and I came across the concept of Evolutionary Strategies. But I couldn't understand how RL and ES are different. Can anyone please explain?</p>
2018-11-14 19:36:35.270000+00:00
2022-07-19 10:41:31.183000+00:00
null
deep-learning|reinforcement-learning|robotics|evolutionary-algorithm
['https://arxiv.org/pdf/2110.01411.pdf']
1
49,368,059
<p>Maybe you should try <a href="https://github.com/vi3k6i5/flashtext" rel="noreferrer">flashtext</a>.<br> According to the author, it is much faster than regex.<br> <a href="https://i.stack.imgur.com/nMfoJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/nMfoJ.png" alt="enter image description here"></a></p> <p>The author even published a <a href="https://arxiv.org/abs/1711.00046" rel="noreferrer">paper</a> for this library. </p> <p>I've personally tried this library for one of my projects; in my opinion its API is quite friendly and usable. </p> <p>Hope it helps.</p>
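<p>A minimal sketch of how it could be wired up for the phrase-highlighting use case (the phrases and payloads here are invented for the example):</p> <pre><code>from flashtext import KeywordProcessor

kp = KeywordProcessor(case_sensitive=False)
# second argument is the payload returned on a match, e.g. the link URL
kp.add_keyword('phrase to match', 'link_url')
kp.add_keyword('another phrase', 'another_link_url')

text = 'Some submitted text containing a phrase to match somewhere inside.'
matches = kp.extract_keywords(text, span_info=True)
print(matches)   # list of (payload, start_index, end_index) tuples
</code></pre> <p>The span info gives you the character ranges you need for the range-in-text -&gt; phrase map.</p>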
2018-03-19 16:38:58.057000+00:00
2018-03-19 16:38:58.057000+00:00
null
null
49,173,770
<p>I'm building a backend and trying to crunch the following problem.</p> <ul> <li>The clients submit text to the backend (around <code>2000</code> characters on average)</li> <li>Backend endpoint that receives the request has to apply phrase highlighting to the submitted text</li> <li><p>There is around <code>80k</code> phrases to match. A phrase is a simple object:</p> <pre><code>{ 'phrase': 'phrase to match' 'link': 'link_url' } </code></pre></li> <li><p>After finding all matches of phrases that exist in the text, the backend returns to the client what was matched - basically a map:</p> <pre><code>range in text -&gt; phrase </code></pre></li> </ul> <p>Most is done. I'm about to tackle coding the phrase matching part. Everything else works smoothly. Since I don't want to reinvent the wheel I tried googling to find a Python library that does the job of efficiently finding phrases (from huge list) in text. However, I couldn't find anything.</p> <p>I checked out the <a href="https://www.crummy.com/software/BeautifulSoup/" rel="noreferrer">BlueSoup</a> and <a href="https://www.nltk.org/" rel="noreferrer">Natural Language Toolkit</a>. However they don't seem to be doing what I'm looking for.</p> <p>Do you guys know if there is a library that would be helpful in such task? Seems like a common thing to implement and I don't want to go custom if there is a well established library for that.</p>
2018-03-08 13:02:39.477000+00:00
2019-03-19 17:08:09.500000+00:00
null
python
['https://github.com/vi3k6i5/flashtext', 'https://i.stack.imgur.com/nMfoJ.png', 'https://arxiv.org/abs/1711.00046']
3
44,683,986
<p>There is a simple approach for this: create your clusters with k-means, then for each cluster set a suitable radius around its center; if a point lies outside that radius, it is an outlier.</p> <p>Try looking at this: <a href="https://arxiv.org/pdf/1402.6859.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1402.6859.pdf</a></p> <hr> <p>There are also dedicated outlier detection techniques, such as <strong>One-Class SVM</strong> or <strong>Angle-Based Outlier Detection</strong> and so on. Try looking at this: <a href="http://scikit-learn.org/stable/modules/outlier_detection.html" rel="nofollow noreferrer">http://scikit-learn.org/stable/modules/outlier_detection.html</a></p>
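<p>A rough sketch of the radius idea in Python with scikit-learn (the question uses Spark/Java, but the logic carries over; the random data and the 95th-percentile threshold are arbitrary for the illustration):</p> <pre><code>import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(500, 2)                       # placeholder for your feature matrix
kmeans = KMeans(n_clusters=5, random_state=1).fit(X)

# distance of every point to its own cluster center
centers = kmeans.cluster_centers_[kmeans.labels_]
dist = np.linalg.norm(X - centers, axis=1)

# per-cluster radius, e.g. the 95th percentile of in-cluster distances
radius = {c: np.percentile(dist[kmeans.labels_ == c], 95) for c in range(5)}
outliers = np.array([d &gt; radius[c] for d, c in zip(dist, kmeans.labels_)])
print(X[outliers])                               # points flagged as anomalies
</code></pre>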
2017-06-21 18:49:10.603000+00:00
2017-06-21 18:49:10.603000+00:00
null
null
44,653,844
<p>I am currently new to machine learning and I will be working on a project that involves using a Machine Learning library to detect and alert about possible anomalies. I will be using Apache Spark and I decided to use the KMeans method to solve the project. </p> <p>The main project consists on analyzing daily files and detecting fluctuating changes in some of the records and reporting them as possible anomalies (if they are considered one based on the model). The files are generated at the end of a day and my program needs to check them on the morning of the next day to see if there is an anomaly. However, I need to check anomalies file vs file, NOT within the file. This means that I have to compare the data of every file and see if it fits to the model I would create following the specific algorithm. What I'm trying to say is that I have some valid data that I will apply the algorithm to in order to train my model. Then I have to apply this same model to other files of the same format but, obviously, different data. I'm not looking for a prediction column but rather detecting anomalies in these other files. If there is an anomaly the program should tell me which row/column has the anomaly and then I have to program it to send an email saying that there is a possible anomaly in the specific file.</p> <p>Like I said I am new to machine learning. I want to know how I can use the KMeans algorithm to detect outliers/anomalies on a file. </p> <p>So far I have created the model: </p> <pre><code>SparkConf conf = new SparkConf().setAppName("practice").setMaster("local"); JavaSparkContext sc = new JavaSparkContext(conf); SparkSession spark = SparkSession .builder() .appName("Anomaly Detection") .getOrCreate(); String day1txt = "C:\\Users\\User\\Documents\\day1.txt"; String day2txt = "C:\\Users\\User\\Documents\\day2.txt"; Dataset&lt;Row&gt; day1 = spark.read(). option("header", "true"). option("delimiter", "\t"). option("inferSchema", "true"). csv(day1txt); day1 = day1.withColumn("Size", day1.col("Size").cast("Integer")); day1 = day1.withColumn("Records", day1.col("Records").cast("Integer")); VectorAssembler assembler = new VectorAssembler() .setInputCols(new String[]{"Size", "Records"}) .setOutputCol("features"); Dataset&lt;Row&gt; day1vector = assembler.transform(day1); KMeans kmeans = new KMeans().setK(5).setSeed(1L); KMeansModel model = kmeans.fit(day1vector); </code></pre> <p>I don't know what to do from this point on to detect outliers. I have several other .txt files that should have "normalized" data, and also I have a couple of files that have "tampered/not-normalized" data. Do I need to train my model with all the test data I have available, and if so, how can I train a model using different datasets? Or can I only train it with one dataset and test it with the others? </p> <p>EDIT:</p> <p>This is a sample of the file (day1.txt) I will be using (dummy data of course / top 10)</p> <pre><code>Name Size Records File1 1000 104370 File2 990 101200 File3 1500 109123 File4 2170 113888 File5 2000 111974 File6 1820 110666 File7 1200 106771 File8 1500 108991 File9 1000 104007 File10 1300 107037 </code></pre> <p>This is considered normal data, and I will have different files with the same format but different values around the same range. Then I have some files where I purposely added an outlier, like Size: 1000, Records: 50000. </p> <p>How can I detect that with KMeans? Or if KMeans is not the perfect model, which model should I use and how should I go around it? </p>
2017-06-20 13:02:45.543000+00:00
2018-03-23 03:29:33.470000+00:00
2017-06-20 17:29:32.800000+00:00
java|apache-spark|machine-learning|data-mining|k-means
['https://arxiv.org/pdf/1402.6859.pdf', 'http://scikit-learn.org/stable/modules/outlier_detection.html']
2
34,772,346
<p>It seems you're attempting to compute 64 features (for each 3x3 patch) in the first convolutional layer and feed this directly into the second convolutional layer, with no intermediate pooling layer. Convolutional neural networks typically have a structure of stacked convolutional layers, followed by contrast normalization and max pooling.</p> <p>To reduce processing overheads, researchers have experimented with moving from fully connected to <a href="http://arxiv.org/pdf/1409.4842v1.pdf" rel="nofollow">sparsely connected architectures</a>, and hence the creation of the Inception architecture. However, whilst these yield good results for high-dimensional inputs, you may be expecting too much from the 32x32 pixels of Cifar10 in TensorFlow.</p> <p>Therefore, I think the issue is less about patch size and more to do with the overall architecture. This <a href="https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10.py" rel="nofollow">code</a> is a known good starting point. Get this working and start reducing parameters until it breaks.</p>
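<p>For reference, a compact sketch of that conv-conv-pool pattern in today's <code>tf.keras</code> (this is not the linked tutorial code; layer sizes are illustrative only):</p> <pre><code>import tensorflow as tf
layers = tf.keras.layers

model = tf.keras.Sequential([
    layers.Conv2D(64, 3, padding='same', activation='relu', input_shape=(32, 32, 3)),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),            # pooling between convolutional blocks
    layers.BatchNormalization(),
    layers.Conv2D(128, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])
model.summary()
</code></pre>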
2016-01-13 16:47:15.367000+00:00
2016-01-13 16:47:15.367000+00:00
null
null
34,762,505
<p>I have the necessity to keep the model as small as possible to deploy an image classifier that can run efficiently on an app (the accuracy is not really relevant for me)</p> <p>I recently approached deep learning and I haven't great experience, hence I'm currently playing with the cifar-10 example. I tried to replace the first two 5x5 convolutional layers with two 3x3 convolution each, as described in the <a href="http://arxiv.org/abs/1512.00567" rel="nofollow">inception paper</a>.</p> <p>Unluckily, when I'm going to classify the test set, I got around 0.1 correct classification (random choice)</p> <p>This is the modified code of the first layer (the second is similar):</p> <pre><code>with tf.variable_scope('conv1') as scope: kernel_l1 = _variable_with_weight_decay('weights_l1', shape=[3, 3, 3, 64], stddev=1e-4, wd=0.0) kernel_l2 = _variable_with_weight_decay('weights_l2', shape=[3, 3, 64, 1], stddev=1e-4, wd=0.0) biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0)) conv_l1 = tf.nn.conv2d(images, kernel_l1, [1, 1, 1, 1], padding='SAME') conv_l2 = tf.nn.depthwise_conv2d(conv_l1, kernel_l2, [1, 1, 1, 1], padding='SAME') bias = tf.nn.bias_add(conv_l2, biases) conv1 = tf.nn.relu(bias, name=scope.name) _activation_summary(conv1) </code></pre> <p>Is it correct?</p>
2016-01-13 09:11:42.663000+00:00
2016-01-13 16:47:15.367000+00:00
null
deep-learning|tensorflow|conv-neural-network
['http://arxiv.org/pdf/1409.4842v1.pdf', 'https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10.py']
2
20,602,833
<p>The numbers you are describing correspond to <a href="https://en.wikipedia.org/wiki/Catalan_number#Applications_in_combinatorics" rel="nofollow">Dyck words</a>. Pt 2 of <a href="http://arxiv.org/pdf/1002.2625," rel="nofollow">Kasa 2009</a> gives a simple algorithm for enumerating them in lexicographic order. Its references should be helpful if you want to do any further reading.</p> <p>As an aside (and be warned I'm half asleep as I write this, so it might be wrong), the wikipedia article notes that the number of Dyck words of length <code>2n</code> is the <code>n</code> th Catalan number, <code>C(n)</code>. You might want to find the smallest <code>n</code> such that <code>C(n)</code> is larger than the <code>k</code> you're looking for, and then enumerate Dyck words starting from <code>X^n Y^n</code>.</p>
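<p>A small Python sketch of the lexicographic unranking idea (not Kasa's exact algorithm, just the standard counting approach: at each position, count how many valid completions start with '0' and compare with k):</p> <pre><code>from functools import lru_cache

def kth_valid_string(n, k):
    # k is 1-indexed; returns None if k exceeds the Catalan number C(n)
    @lru_cache(maxsize=None)
    def count(z, o):
        # number of valid completions after placing z zeros and o ones
        if z == n and o == n:
            return 1
        total = 0
        if z &lt; n:
            total += count(z + 1, o)
        if o &lt; z:
            total += count(z, o + 1)
        return total

    if k &gt; count(0, 0):
        return None
    word, z, o = [], 0, 0
    while len(word) &lt; 2 * n:
        c0 = count(z + 1, o) if z &lt; n else 0
        if k &lt;= c0:
            word.append('0'); z += 1
        else:
            k -= c0
            word.append('1'); o += 1
    return ''.join(word)

print(kth_valid_string(2, 2))   # '0101', matching the example in the question
</code></pre>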
2013-12-16 03:09:35.220000+00:00
2013-12-16 10:49:21.873000+00:00
2013-12-16 10:49:21.873000+00:00
null
20,601,582
<p>Let's assume we will consider binary numbers which has length <code>2n</code> and <code>n</code> might be about <code>1000</code>. We are looking for <code>kth</code> number (k is limited by <code>10^9</code>) which has following properties:</p> <ul> <li>Amount of <code>1's</code> is equal to amount of <code>0's</code> what can be described as following: <code>#(1) = #(0)</code></li> <li>Every prefix of this number has to contain atleast as much <code>0's</code> as <code>1's</code>. It might be easier to understand it after negating the sentence, which is: There is no prefix which would contain more <code>1's</code> than <code>0's</code>.</li> </ul> <p>And basically that's it. So to make it clear let's do some example: <code>n=2</code>, <code>k=2</code> we have to take binary number of length <code>2n</code>:</p> <pre><code>0000 0001 0010 0011 0100 0101 0110 0111 1000 and so on... </code></pre> <p>And now we have to find <code>2nd</code> number which fulfill those two requirements. So we see <code>0011</code> is the first one, and <code>0101</code> is second one. If we change <code>k=3</code>, then answer doesn't exist since there are number which have same amount of opposite bits, but for <code>0110</code>, there is prefix <code>011</code> so number doesn't fulfill second constraint and same would be with all numbers which has <code>1</code> as most significant bit.</p> <p><strong>So what I did so far to find algorithm?</strong></p> <p>Well my first idea was to generate all possible bits settings, and check whether it has those two properties, but generate them all would take <code>O(2^(2n))</code> which is not an option for <code>n=1000</code>.</p> <p>Additionally I realize there is no need to check all numbers which are smaller than <code>0011</code> for <code>n=2</code>, <code>000111</code> for <code>n=3</code>, and so on... frankly speaking those which half of most significant bits remains "untouched" because those numbers have no possibility to fulfill <code>#(1) = #(0)</code> condition. Using that I can reduce <code>n</code> by half, but it doesn't help much. Instead of 2 * forever I have forever running algorithm. It's still <code>O(2^n)</code> complexity, which is way too big.</p> <p>Any idea for algorithm?</p> <p><strong>Conclusion</strong></p> <p>This text has been created as a result of my thoughts after reading Andy Jones post.</p> <p>First of all I wouldn't post code I have used since it's point 6 in following document from Andy's post <a href="http://arxiv.org/pdf/1002.2625," rel="nofollow">Kasa 2009</a>. All you have to do is consider <code>nr</code> as that what I described as <code>k</code>. Unranking Dyck words algorithm, would help us find out answer much faster. However it has one bottleneck.</p> <pre><code>while (k &gt;= C(n-i,j)) </code></pre> <p>Considering that <code>n &lt;= 1000</code>, Catalan number can be quite huge, even <code>C(999,999)</code>. We can use some big number arithmetic, but on the other hand I came up with little trick to overpass it and use standard integer.</p> <p>We don't want to know how big actually Catalan number is as long as it's bigger than <code>k</code>. So now we will create Catalan numbers caching partial sums in <code>n x n</code> table.</p> <pre><code>... ... 5 | 42 ... 4 | 14 42 ... 3 | 5 14 28 ... 2 | 2 5 9 14 ... 1 | 1 2 3 4 5 ... 0 | 1 1 1 1 1 1 ... ---------------------------------- ... 0 1 2 3 4 5 ... 
</code></pre> <p>To generate it is quite trivial:</p> <pre><code>C(x,0) = 1 C(x,y) = C(x,y-1) + C(x-1,y) where y &gt; 0 &amp;&amp; y &lt; x C(x,y) = C(x,y-1) where x == y </code></pre> <p>So what we can see only this:</p> <pre><code>C(x,y) = C(x,y-1) + C(x-1,y) where y &gt; 0 &amp;&amp; y &lt; x </code></pre> <p>can cause overflow.</p> <p><strong>Let's stop at this point and provide definition.</strong></p> <p><code>k-flow</code> - it's not real overflow of integer but rather information that value of <code>C(x,y)</code> is bigger than <code>k</code>.</p> <p>My idea is to check after each running of above formula whether <code>C(x,y)</code> is grater than <code>k</code> or any of sum components is <code>-1</code>. If it is we put <code>-1</code> instead, which would act as a marker, that <code>k-flow</code> has happened. I guess it quite obvious that if <code>k-flow</code> number is sum up with any positive number it's still be <code>k-flowed</code> in particular sum of 2 <code>k-flowed</code> numbers is <code>k-flowed</code>.</p> <p>The last what we have to prove is that there is no possibility to create real overflow. Real overflow might only happen if we sum up <code>a + b</code> which non of them is <code>k-flowed</code> but as sum they generated the real overflow.</p> <p>Of course it's impossible since maximum value can be described as <code>a + b &lt;= 2 * k &lt;= 2*10^9 &lt;= 2,147,483,647</code> where last value in this inequality is value of int with sign. I assume also that int has 32 bits, as in my case.</p>
2013-12-16 00:13:52.137000+00:00
2017-07-23 22:01:22.760000+00:00
2017-02-01 12:41:49.937000+00:00
algorithm|binary|sequence|catalan
['https://en.wikipedia.org/wiki/Catalan_number#Applications_in_combinatorics', 'http://arxiv.org/pdf/1002.2625,']
2
46,661,742
<p>I will assume that you have already double, triple and quadruple checked that the data going in matches what you expect.</p> <hr> <p>The question is quite open-ended, and even a topic for research. But there are some things that can help.</p> <p>In terms of better training, there are two common ways in which people train neural networks with an unbalanced dataset.</p> <ul> <li>Oversample the examples with lower frequency, such that the proportion of examples for each class that the network sees is equal, e.g. in every batch, enforce that 1/4 of the examples are from class 1, 1/4 from class 2, etc.</li> <li>Weight the error for misclassifying each class by the inverse of its proportion, e.g. incorrectly classifying an example of class 1 is worth 100/43, while incorrectly classifying an example of class 4 is worth 100/7 (see the weighting sketch below).</li> </ul> <p>That being said, if your learning rate is good, neural networks will often eventually (after many hours of just sitting there) jump out of only predicting one class, but they still rarely end well with a badly skewed dataset.</p> <hr> <p>If you want to know whether or not there <em>are</em> patterns in your data which can be determined, there is a simple way to do that. </p> <p>Create a new dataset by randomly selecting elements from all of your classes such that you have an even number of all of them (i.e. if there are 700 examples of class 4, then construct a dataset by randomly selecting 700 examples from every class).</p> <p>Then you can use all of your techniques on this new dataset.</p> <p>That said, <a href="https://arxiv.org/abs/1611.03530" rel="nofollow noreferrer">this paper</a> suggests that even with random labels, a network should be able to find some pattern that it can fit.</p>
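<p>A minimal runnable sketch of the class-weighting option in Keras (the synthetic data below just mimics the 43/37/13/7 split from the question; the tiny dense model is only a stand-in for your CNN):</p> <pre><code>import numpy as np
import tensorflow as tf
from sklearn.utils.class_weight import compute_class_weight

# synthetic stand-in: 4 classes with roughly the question's class proportions
y_train = np.random.choice(4, size=10000, p=[0.43, 0.37, 0.13, 0.07])
x_train = np.random.rand(10000, 20).astype('float32')

weights = compute_class_weight(class_weight='balanced', classes=np.unique(y_train), y=y_train)
class_weight = dict(enumerate(weights))   # rare classes get proportionally larger weights

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(4, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=2, class_weight=class_weight, verbose=0)
</code></pre>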
2017-10-10 08:15:16.777000+00:00
2017-10-10 08:15:16.777000+00:00
null
null
46,661,373
<p>I am experimenting with classification using neural networks (I am using tensorflow). And unfortunately the training of my neural network gets stuck at 42% accuracy. I have 4 classes, into which I try to classify the data. And unfortunately, my data set is not well balanced, meaning that:</p> <ol> <li>43% of the data belongs to class 1 (and yes, my network gets stuck predicting only this)</li> <li>37% to class 2</li> <li>13% to class 3</li> <li>7% to class 4</li> </ol> <p>The optimizer I am using is AdamOptimizer and the cost function is tf.nn.softmax_cross_entropy_with_logits.</p> <p>I was wondering if the reason for my training getting stuck at 42% is really the fact that my data set is not well balanced, or because the nature of the data is really random, and there are really no patterns to be found.</p> <p>Currently my NN consists of:</p> <ol> <li>input layer </li> <li>2 convolution layers </li> <li>7 fully connected layers</li> <li>output layer</li> </ol> <p>I tried changing this structure of the network, but the result is always the same. I also tried Support Vector Classification, and the result is pretty much the same, with small variations.</p> <p>Did somebody else encounter similar problems? Could anybody please provide me some hints how to get out of this issue?</p> <p>Thanks, Gerald</p>
2017-10-10 07:53:49.190000+00:00
2018-04-17 00:34:47.357000+00:00
2017-10-10 07:58:23.533000+00:00
machine-learning|tensorflow|neural-network|deep-learning|conv-neural-network
['https://arxiv.org/abs/1611.03530']
1
62,841,087
<p>Depending on what you are trying to model, it may or may not be correct to do so.</p> <p>Training on an imbalanced dataset will generally make your model overfit those elements that appear more often, which leads to bias towards those ones at best or no understanding of the underrepresented samples at worst. If you are trying to model the natural occurrences of some information, then an unbalanced dataset in essence has a prior probability applied to it already, so the resulting bias may be desired. In these cases, the number of elements per class, say, <em>is</em> part of the actual information. Such a bias can be (un-)modeled artificially too, however, e.g. by applying a scaling factor for classification (e.g. through class weights), etc. To avoid such bias, boosting and ensemble methods such as Xgboost (or Adaboost in more trivial cases) or just Random Forests work relatively well. If you have the time, k-fold cross validation can help reducing the error further.</p> <p>To make sure every sample is adequately represented, you may choose to oversample the underrepresented classes or undersample the overrepresented ones. In order to determine correct likelihoods, make sure to capture the prior distribution as well and use it to shape the posterior. Data augmentation may help you if the number of samples is low; depending on your case, synthetic data generation might be a good approach. You could try, say, training a GAN only on the underrepresented samples and use that to generate more - as in idea: train it on all available data first, then change the discriminator loss to force it to forge and recognize the underrepresented classes only. Without entering the Deep Learning domain, techniques such as <a href="https://arxiv.org/abs/1106.1813" rel="nofollow noreferrer">SMOTE</a> or ADASYN may work. Both are available in the <a href="https://github.com/scikit-learn-contrib/imbalanced-learn/" rel="nofollow noreferrer"><code>imblearn</code></a> Python package which builds on scikit-learn.</p> <p>Lastly, carefully selecting the loss metric may help. You can find more (and more detailed) information in papers such as <a href="https://link.springer.com/article/10.1186/s40537-019-0192-5" rel="nofollow noreferrer">Survey on deep learning with class imbalance</a>.</p>
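<p>For the oversampling route mentioned above, a minimal sketch with <code>imbalanced-learn</code> (synthetic data stands in for a real skewed dataset):</p> <pre><code>from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=5000, n_classes=3, n_informative=6,
                           weights=[0.80, 0.15, 0.05], random_state=0)
print(Counter(y))        # heavily skewed toward class 0

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y_res))    # minority classes synthetically upsampled to the majority count
</code></pre> <p>Whether you should apply it at all depends, as discussed above, on whether the imbalance itself carries information you want the model to keep.</p>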
2020-07-10 19:26:59.003000+00:00
2020-07-22 18:59:36.160000+00:00
2020-07-22 18:59:36.160000+00:00
null
62,832,445
<p><strong>Background -</strong> The dataset I am working on is highly imbalanced and the number of classes is 543. The data is bounded by date. After exploring the data over a span of 5 years I came to know the imbalance is inherent and its persistent. The test data which the model will get will also be bounded by a date range and it will also have a similar imbalance.</p> <p>The reason for the imbalance in the data is different amount of spend, popularity of a product. Handling imbalance would do injustice to the business.</p> <p><strong>Questions -</strong> In such a case, is it okay to proceed with building model on imbalanced data?</p> <p>The model would be retrained every month on the new data and it would be used for predictions once in a month.</p>
2020-07-10 10:33:15.200000+00:00
2020-07-22 18:59:36.160000+00:00
2020-07-10 16:02:46.377000+00:00
machine-learning|scikit-learn|imbalanced-data
['https://arxiv.org/abs/1106.1813', 'https://github.com/scikit-learn-contrib/imbalanced-learn/', 'https://link.springer.com/article/10.1186/s40537-019-0192-5']
3
62,776,500
<p>A lot of it depends on what you're doing. Generally, you'll want to try an autoencoder or a transformer and do unsupervised/semisupervised learning.</p> <p>Here's some material that might give some insight into some methods.<br /> <a href="https://arxiv.org/abs/2006.07733" rel="nofollow noreferrer">https://arxiv.org/abs/2006.07733</a> (Bootstrap your own latent -- DeepMind)<br /> <a href="https://arxiv.org/abs/1610.02242" rel="nofollow noreferrer">https://arxiv.org/abs/1610.02242</a> (Temporal Ensembling for Semi-Supervised Learning -- Laine &amp; Aila)<br /> <a href="https://www.coursera.org/lecture/intro-to-deep-learning/autoencoders-101-QqBOa" rel="nofollow noreferrer">https://www.coursera.org/lecture/intro-to-deep-learning/autoencoders-101-QqBOa</a> (Autoencoder -- Andrew Ng)</p>
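<p>Regarding the mechanics of taking features from a hidden layer and keeping track of their classes, a small Keras sketch (the BRNN below is a made-up stand-in for your trained model; swap in your own architecture and weights):</p> <pre><code>import numpy as np
import tensorflow as tf

# hypothetical trained classifier; replace with your own BRNN / CNN-RNN
model = tf.keras.Sequential([
    tf.keras.Input(shape=(100, 8)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32), name='brnn'),
    tf.keras.layers.Dense(41, activation='softmax'),
])

# a second model that stops at the hidden layer whose activations you want
feature_extractor = tf.keras.Model(model.input, model.get_layer('brnn').output)

x = np.random.rand(16, 100, 8).astype('float32')   # a batch of time series
y = np.random.randint(0, 41, size=16)              # their known class labels
features = feature_extractor.predict(x)            # shape (16, 64)

# group the feature vectors by class for later per-class calculations
by_class = {c: features[y == c] for c in np.unique(y)}
</code></pre>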
2020-07-07 13:33:03.977000+00:00
2020-07-07 13:33:03.977000+00:00
null
null
62,776,295
<p>I have a time series dataset and I want to extract its features using a BRNN or CNN-RNN (in Python).</p> <p>First, I train the model with the classification layer and obtain the best accuracy.</p> <p>Then, I want to take the features from one of the model's hidden layers.</p> <p>But I have 41 classes; how can I extract the features for all these classes, and how do I know which class the extracted features belong to?</p> <p>Because I want each class with its extracted features for some calculations later.</p>
2020-07-07 13:22:23.597000+00:00
2020-07-07 13:33:03.977000+00:00
2020-07-07 13:23:55.513000+00:00
python|feature-extraction
['https://arxiv.org/abs/2006.07733', 'https://arxiv.org/abs/1610.02242', 'https://www.coursera.org/lecture/intro-to-deep-learning/autoencoders-101-QqBOa']
3
45,950,050
<p>I guess you are asking this question because the problem you are trying to solve has some memory constraints, or because you want to make some changes to the Pattern class.</p> <ol> <li><p>In case you have memory issues, you can go ahead and use the Esper or Siddhi CEP engines, as they don't have the dependencies Flink has.</p></li> <li><p>For the second case, i.e. you want to make changes to the pattern handling or see how query processing works, you should use the NFA class, as @Dawid pointed out. This is because CEP pattern matching basically works by parsing the query tree as a Non-deterministic Finite Automaton, which receives input streams at the leaf nodes and uses operators at each level to correlate and filter stream values, sending the result to the upper-level operator for further filtering, and so on. The final values are received at the root of this tree.</p></li> </ol> <p>In particular, I found the SASE engine helpful if you are going to interact with a low-level, basic CEP engine. The link for the SASE paper is <a href="https://arxiv.org/ftp/cs/papers/0612/0612128.pdf" rel="nofollow noreferrer">https://arxiv.org/ftp/cs/papers/0612/0612128.pdf</a></p> <p>You can look at the code of the NFA file to get more understanding: <a href="https://github.com/haopeng/sase/tree/master/src/edu/umass/cs/sase/query" rel="nofollow noreferrer">https://github.com/haopeng/sase/tree/master/src/edu/umass/cs/sase/query</a></p> <blockquote> <p>Please let me know if you have any questions</p> </blockquote>
2017-08-30 00:12:09.737000+00:00
2017-08-30 00:12:09.737000+00:00
null
null
45,879,034
<p>As in the title: Is it possible to use just Flink's pattern matching without the rest of the Flink environment?</p>
2017-08-25 10:04:14.600000+00:00
2017-08-30 00:12:09.737000+00:00
2017-08-25 11:29:29.497000+00:00
java|apache-flink|flink-cep
['https://arxiv.org/ftp/cs/papers/0612/0612128.pdf', 'https://github.com/haopeng/sase/tree/master/src/edu/umass/cs/sase/query']
2
44,077,525
<p>Some well-known initializers for Convolutional Neural Networks:</p> <p><strong>Glorot Normal</strong>: Also called Xavier. Normal distribution centered on 0 with stddev = sqrt(2 / (fan_in + fan_out)), where fan_in is the number of input units in the weight tensor and fan_out is the number of output units in the weight tensor.</p> <p><a href="http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf" rel="nofollow noreferrer">http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf</a></p> <p><strong>Lecun Uniform</strong>: Uniform distribution within [-limit, limit], where limit is sqrt(3 / fan_in) and fan_in is the number of input units in the weight tensor.</p> <p><a href="http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf" rel="nofollow noreferrer">http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf</a></p> <p><strong>He Normal</strong>: Truncated normal distribution centered on 0 with stddev = sqrt(2 / fan_in), where fan_in is the number of input units in the weight tensor.</p> <p><a href="http://arxiv.org/abs/1502.01852" rel="nofollow noreferrer">http://arxiv.org/abs/1502.01852</a></p> <p>Along with these initializers, one has to search over the learning rate, momentum and other hyperparameters.</p>
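<p>In Keras/TensorFlow these initializers can be selected per layer; a small sketch (layer sizes are arbitrary, and string aliases such as <code>'he_normal'</code> or <code>'glorot_normal'</code> work as well):</p> <pre><code>import tensorflow as tf

# He-normal pairs well with ReLU activations
conv = tf.keras.layers.Conv2D(32, 3, activation='relu',
                              kernel_initializer=tf.keras.initializers.HeNormal(seed=0))

# Glorot/Xavier is the Keras default for Dense and Conv layers
dense = tf.keras.layers.Dense(10,
                              kernel_initializer=tf.keras.initializers.GlorotNormal(seed=0))

lecun = tf.keras.layers.Dense(10,
                              kernel_initializer=tf.keras.initializers.LecunUniform(seed=0))
</code></pre>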
2017-05-19 19:17:07.073000+00:00
2017-05-19 19:17:07.073000+00:00
null
null
44,076,945
<p>How do people typically choose initial values for their variables and parameters? Do we just tinker till it works?</p> <p>I was following the Getting Started tutorial for tensorflow, and was able to train the linear model in it. However, I noticed that the starting values for the variables W, b were reasonably close to the ground truth.</p> <p>When I change the data to make the ground truth values much further away, the gradient descent optimizer gives me NaN values for W, b.</p> <p>However, in general, I don't think it would be reasonable to be able to guess the initial values of the variables in the model. Seems like I should be able to choose any arbitrary starting point and get to where I want.</p> <p>I was thinking my choice in my parameters might be bad. However, I am not sure in what way to adjust this. The default was 0.01, I've tried values from 0.001 to 100.</p> <p>Would there be a discussion of optimization parameter choices and initial values for model variables in a general machine learning book? Really I am just looking for resources.</p> <p>Thanks!</p>
2017-05-19 18:41:23.590000+00:00
2017-05-19 19:17:07.073000+00:00
null
tensorflow
['http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf', 'http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf', 'http://arxiv.org/abs/1502.01852']
3
43,056,628
<p>It is not trivial and requires a special architecture. You can read descriptions of it in the publications of <a href="https://deepmind.com/blog/wavenet-generative-model-raw-audio/" rel="nofollow noreferrer">DeepMind</a> and <a href="https://arxiv.org/abs/1702.07825" rel="nofollow noreferrer">Baidu</a>.</p> <p>You might also want to study an <a href="https://github.com/ibab/tensorflow-wavenet" rel="nofollow noreferrer">existing implementation of WaveNet training</a>.</p> <p>Overall, pure end-to-end speech synthesis is still not working well. If you are serious about text-to-speech, it is better to study conventional systems like <a href="https://github.com/CSTR-Edinburgh/merlin" rel="nofollow noreferrer">Merlin</a>. </p>
2017-03-27 21:15:35.343000+00:00
2017-03-27 21:15:35.343000+00:00
null
null
43,053,969
<p>I am creating a Text to Speech system for a phonetic language called "Kannada" and I plan to train it with a Neural Network. The input is a word/phrase while the output is the corresponding audio.</p> <p>While implementing the Network, I was thinking the input should be the segmented characters of the word/phrase as the output pronunciation only depends on the characters that make up the word, unlike English where we have slient words and Part of Speech to consider. However, I do not know how I should train the output. </p> <p>Since my Dataset is a collection of words/phrases and the corrusponding MP3 files, I thought of converting these files to WAV using pydub for all audio files.</p> <pre><code>from pydub import AudioSegment sound = AudioSegment.from_mp3("audio/file1.mp3") sound.export("wav/file1.wav", format="wav") </code></pre> <p>Next, I open the wav file and convert it to a normalized byte array with values between 0 and 1. </p> <pre><code>import numpy as np import wave f = wave.open('wav/kn3.wav', 'rb') frames = f.readframes(-1) #Array of integers of range [0,255] data = np.fromstring(frames, dtype='uint8') #Normalized bytes of wav arr = np.array(data)/255 </code></pre> <p><strong>How Should I train this?</strong></p> <p>From here, I am not sure how to train this with the input text. From this, I would need a variable number of input and output neurons in the First and Last layers as the number of characters (1st layer) and the bytes of the corresponding wave (Last layer) change for every input. </p> <p>Since RNNs deal with such variable data, I thought it would come in handy here. </p> <p>Correct me if I am wrong, but the output of Neural Networks are actually probability values between 0 and 1. However, we are not dealing with a classification problem. The audio can be anything, right? In my case, the "output" should be a vector of bytes corrusponding to the WAV file. So there will be around 40,000 of these with values between 0 and 255 (without the normalization step) for every word. How do I train this speech data? Any suggestions are appreciated.</p> <p><strong>EDIT 1</strong> : In response to <strong>Aaron's</strong> comment</p> <p>From what I understand, Phonemes are the basic sounds of the language. So, why do I need a neural network to map phoneme labels with speech? Can't I just say, "whenever you see this alphabet, pronounce it like <em>this</em>". After all, this language, Kannada, is phonetic: There are no silent words. All words are pronounced the same way they are spelled. How would a Neural Network help here then?</p> <p>On input of a new text, I just need to break it down to the corresponding alphabets (which are also the phonemes) and retrieve it's file (converted from WAV to raw byte data). Now, merge the bytes together and convert it to a wav file.</p> <p>Is this this too simplistic? Am I missing something here? What would be the point of a Neural Network for this particular language (Kannada) ?</p>
2017-03-27 18:34:01.387000+00:00
2017-03-28 13:05:04.430000+00:00
2017-03-28 13:05:04.430000+00:00
python|neural-network|speech-recognition|text-to-speech
['https://deepmind.com/blog/wavenet-generative-model-raw-audio/', 'https://arxiv.org/abs/1702.07825', 'https://github.com/ibab/tensorflow-wavenet', 'https://github.com/CSTR-Edinburgh/merlin']
4
49,932,860
<p>There has been some research on "one-class classification". Here are a couple of papers:</p> <ul> <li><a href="http://homepage.tudelft.nl/n9d04/thesis.pdf" rel="nofollow noreferrer">One-class classification</a> by David Martinus Johannes</li> <li><a href="https://arxiv.org/abs/1801.05365" rel="nofollow noreferrer">Learning Deep Features for One-Class Classification</a> by Pramuditha Perera, Vishal M. Patel The code implementation is available here: <a href="https://github.com/PramuPerera/DeepOneClass" rel="nofollow noreferrer">https://github.com/PramuPerera/DeepOneClass</a></li> </ul> <p>If your data is in the form of images, you could try using Generative Adversarial Networks (GANs) to generate negative data. There is a post on this problem here: <a href="https://www.quora.com/Could-I-use-GANs-to-generate-negative-samples-for-one-class-classification" rel="nofollow noreferrer">Could I use GANs to generate negative samples for one class classification?</a> He references Johannes' thesis.</p> <p>If you program in Python check out what SciKit-Learn has to offer:</p> <ul> <li><a href="http://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html#sklearn.svm.OneClassSVM" rel="nofollow noreferrer">sklearn.svm.OneClassSVM</a></li> <li>Example: <a href="http://scikit-learn.org/stable/auto_examples/svm/plot_oneclass.html#sphx-glr-auto-examples-svm-plot-oneclass-py" rel="nofollow noreferrer">One-class SVM with non-linear kernel (RBF)</a></li> </ul>
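<p>For a quick feel of the scikit-learn route, a minimal One-Class SVM sketch (random numbers stand in for real feature vectors of the positive class):</p> <pre><code>import numpy as np
from sklearn.svm import OneClassSVM

# train only on 'positive' examples, e.g. feature vectors of widgets known to glurb
X_pos = np.random.randn(500, 6)                  # placeholder features
clf = OneClassSVM(kernel='rbf', nu=0.1, gamma='scale').fit(X_pos)

X_new = np.random.randn(10, 6)                   # unseen widgets
pred = clf.predict(X_new)                        # +1 = resembles the training class, -1 = outlier
print(pred)
</code></pre>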
2018-04-20 02:11:29.863000+00:00
2018-04-20 02:19:56.890000+00:00
2018-04-20 02:19:56.890000+00:00
null
49,930,228
<p>My company makes widgets. We make very high quality widgets, but occasionally a widget will suffer a defect known as a 'glurb'. A widget might never glurb over its entire lifetime, it may glurb once, or it may glurb multiple times. A widget's lifetime may be a few months or many years. </p> <p>We maintain a database that lists every instance of a widget glurbing. For each glurb event, we know which widget glurbed, when it glurbed, and we have features about the widget before it glurbed. We know for 100% certain that when a widget glurbs, it is recorded in our database. </p> <p>Management wants to build a machine learning model that, given a particular widget, will predict whether or not it will glurb in, say, the next six months.</p> <p>I have a problem: I have a set of observations that show when a widget glurbs, which is the 'positive' training set, but I have no 'negative' (did not glurb) training set.</p> <p>Is it statistically valid for me to choose a time, date, and widget at random, look into my database, and if I see that widget didn't glurb for 6 months after the chosen date/time, to declare that as an instance of a 'didn't glurb' event and put that in my 'negative' training set sample?</p> <p>Is there a statistically valid way to generate a 'negative' test set from the data I have? If so, what would it be? If not, how could I build a classifier from the data I have?</p>
2018-04-19 21:04:13.340000+00:00
2018-04-20 05:48:00.497000+00:00
2018-04-20 05:48:00.497000+00:00
machine-learning|classification
['http://homepage.tudelft.nl/n9d04/thesis.pdf', 'https://arxiv.org/abs/1801.05365', 'https://github.com/PramuPerera/DeepOneClass', 'https://www.quora.com/Could-I-use-GANs-to-generate-negative-samples-for-one-class-classification', 'http://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html#sklearn.svm.OneClassSVM', 'http://scikit-learn.org/stable/auto_examples/svm/plot_oneclass.html#sphx-glr-auto-examples-svm-plot-oneclass-py']
6
45,481,564
<p>If you need strong communication between your Python and OCaml code, there should indeed be two separate "master" processes (think of them as network nodes).</p> <p>As @Sven Marnach already mentioned, a good option to implement this is to link these two processes via a JSON-based protocol.</p> <p>A more convenient approach would be to use Google's gRPC framework (<a href="https://grpc.io/" rel="nofollow noreferrer">https://grpc.io/</a>) and communicate via Protobuf messages (<a href="https://developers.google.com/protocol-buffers/" rel="nofollow noreferrer">https://developers.google.com/protocol-buffers/</a>). The framework is very handy. Unfortunately, there is no support for OCaml yet, but I think you can wrap your OCaml <code>main</code> in a thin Python layer, or translate it to JS. Then all you need to do is connect your functions to gRPC interfaces.</p> <p>Here is how the system would look:</p> <pre><code>+----------+   +------+   +---Thin Python wrapper / JS wrapper---+
|   Your   |   |      |   |  +--------------------------------+  |
|  Python  |&lt;-&gt;| gRPC |&lt;-&gt;|  |         Your OCaml app         |  |
|   app    |   |      |   |  +--------------------------------+  |
+----------+   +------+   +--------------------------------------+
</code></pre> <p><strong>P.S.</strong> I'm using the same approach in a problem similar to yours (but the GUI is in Java). I'd say that it is very convenient, fast to develop, and easily extendable.</p> <p><strong>P.P.S.</strong> You are not alone in this :). Here is an interesting excerpt from a paper by a (former?) Google employee (<a href="https://arxiv.org/abs/1702.01715" rel="nofollow noreferrer">https://arxiv.org/abs/1702.01715</a>):</p> <blockquote> <p>Software engineers at Google are strongly encouraged to program in one of five officially-approved programming languages at Google: C++, Java, Python, Go, or JavaScript.</p> <p>Interoperation between these different programming languages is done mainly using Protocol Buffers. Protocol Buffers is a way of encoding structured data in an efficient yet extensible way. It includes a domain-specific language for specifying structured data, together with a compiler that takes in such descriptions and generates code in C++, Java, Python, for constructing, accessing, serializing, and deserializing these objects. Google’s version of Protocol Buffers is integrated with Google’s RPC libraries, enabling simple cross-language RPCs, with serialization and deserialization of requests and responses handled automatically by the RPC framework.</p> </blockquote>
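<p>As a rough illustration of the simpler JSON-based option mentioned above, here is a sketch of the Python side driving a long-lived OCaml process over stdin/stdout; the executable name and message fields are made up for the example, and the OCaml side would need to implement the matching one-JSON-object-per-line loop:</p> <pre><code>import json
import subprocess

# Hypothetical OCaml binary that reads one JSON request per line on stdin
# and writes exactly one JSON response per line on stdout.
proc = subprocess.Popen(["./ocaml_backend"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True, bufsize=1)

def call_ocaml(command, payload):
    # One request/response round trip; any OCaml-side state lives as long
    # as the subprocess does, so the session is preserved between calls.
    proc.stdin.write(json.dumps({"cmd": command, "arg": payload}) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())
</code></pre>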
2017-08-03 10:27:34.253000+00:00
2017-08-03 10:27:34.253000+00:00
null
null
45,479,510
<p>I would like a python GUI to have an OCaml process in the background. I would like to keep a single session throughout the program's lifetime, and depending on user inputs, call some OCaml commands and retrieve OCaml's output. Some OCaml variables and structures may be defined along the way so I would like to maintain a single ongoing session.</p> <p>My solution was to hold an OCaml toplevel process using popen and interact with its stdin and stdout. This works purely for me for several reasons: 1. I don't know when is the OCaml calculation done and can't tell if it's output is complete or there is more to come (especially so if the evaluation takes some time, and if multiple OCaml commands were invoked). 2. I have no inherent way of telling whether the OCaml command ran smoothly or maybe there were OCaml warnings or errors. 3. I lose the structure of OCaml's output. For example, if the output spreads over several lines, I can't tell which lines were broken due to line size, and which were originally separate lines. </p> <p>I know there are some discussions and some packages for combining python with OCaml, but they all run python commands from OCaml, and I need the opposite. </p>
2017-08-03 08:59:07.887000+00:00
2017-08-03 23:10:54.993000+00:00
null
python|ocaml|integration
['https://grpc.io/', 'https://developers.google.com/protocol-buffers/', 'https://arxiv.org/abs/1702.01715']
3
42,579,491
<p>indraforyou's answer covers how to solve the problem you are having. I want to add something for the Inception model specifically. In <a href="https://arxiv.org/pdf/1312.6229.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1312.6229.pdf</a> they propose a regressor network trained on the output of a model trained on the ImageNet dataset, like the Inception model. This regressor model is then used to propose object boundaries for you to use for counting. The advantage of this approach is that you do not have to annotate any training examples and you can just use the ImageNet dataset for training. </p> <p>If you do not want to train anything, I would propose a heuristic for finding object boundaries. The literature on image segmentation <a href="https://en.wikipedia.org/wiki/Image_segmentation" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Image_segmentation</a> should help you find a suitable heuristic. I do think using a heuristic will decrease your accuracy though.</p> <p>Last but not least, this is an open problem in computer vision research. You should not expect to get 100% accuracy or even 95% accuracy on counting. Many very smart people have tried this and reported mixed results. Still, some very cool things can be accomplished.</p>
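<p>For the heuristic route, a very rough sketch of "find object boundaries and count them" is below; it assumes images clean enough that simple thresholding separates objects from background, which is a strong assumption for real photos, and the threshold and minimum blob size are arbitrary example values:</p> <pre><code>import numpy as np
from scipy import ndimage

def count_objects(gray, threshold=0.5, min_pixels=20):
    # Binarize, label connected components, and drop tiny blobs that are
    # probably noise; the number of surviving labels is the object count.
    mask = gray &gt; threshold
    labeled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    return int(np.sum(sizes &gt;= min_pixels))

# Toy example: two bright squares on a dark background.
img = np.zeros((100, 100))
img[10:30, 10:30] = 1.0
img[60:90, 50:80] = 1.0
print(count_objects(img))  # 2
</code></pre>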
2017-03-03 12:57:04.620000+00:00
2017-03-03 12:57:04.620000+00:00
null
null
41,247,221
<p>I have already gone through the image classification part in Inception model, but I require to count the objects in the image. </p> <p>Considering the flowers data-set, one image can have multiple instances of a flower, so how can I get that count?</p>
2016-12-20 16:42:04.603000+00:00
2017-03-04 04:05:19.510000+00:00
2016-12-20 17:33:55.010000+00:00
image-processing|tensorflow|deep-learning
['https://arxiv.org/pdf/1312.6229.pdf', 'https://en.wikipedia.org/wiki/Image_segmentation']
2
42,479,298
<p>What you describe is known to research community as <strong>Instance-Level Segmentation</strong>. </p> <p>In last year itself there have been a significant spike in papers addressing this problem.</p> <p>Here are some of the papers:</p> <ul> <li><a href="https://arxiv.org/pdf/1412.7144v4.pdf" rel="noreferrer">https://arxiv.org/pdf/1412.7144v4.pdf</a></li> <li><a href="https://arxiv.org/pdf/1511.08498v3.pdf" rel="noreferrer">https://arxiv.org/pdf/1511.08498v3.pdf</a></li> <li><a href="https://arxiv.org/pdf/1607.03222v2.pdf" rel="noreferrer">https://arxiv.org/pdf/1607.03222v2.pdf</a></li> <li><a href="https://arxiv.org/pdf/1607.04889v2.pdf" rel="noreferrer">https://arxiv.org/pdf/1607.04889v2.pdf</a></li> <li><a href="https://arxiv.org/pdf/1511.08250v3.pdf" rel="noreferrer">https://arxiv.org/pdf/1511.08250v3.pdf</a></li> <li><a href="https://arxiv.org/pdf/1611.07709v1.pdf" rel="noreferrer">https://arxiv.org/pdf/1611.07709v1.pdf</a></li> <li><a href="https://arxiv.org/pdf/1603.07485v2.pdf" rel="noreferrer">https://arxiv.org/pdf/1603.07485v2.pdf</a></li> <li><a href="https://arxiv.org/pdf/1611.08303v1.pdf" rel="noreferrer">https://arxiv.org/pdf/1611.08303v1.pdf</a></li> <li><a href="https://arxiv.org/pdf/1611.08991v2.pdf" rel="noreferrer">https://arxiv.org/pdf/1611.08991v2.pdf</a></li> <li><a href="https://arxiv.org/pdf/1611.06661v2.pdf" rel="noreferrer">https://arxiv.org/pdf/1611.06661v2.pdf</a></li> <li><a href="https://arxiv.org/pdf/1612.03129v1.pdf" rel="noreferrer">https://arxiv.org/pdf/1612.03129v1.pdf</a></li> <li><a href="https://arxiv.org/pdf/1605.09410v4.pdf" rel="noreferrer">https://arxiv.org/pdf/1605.09410v4.pdf</a></li> </ul> <p>As you see in these papers simple object classification network won't solve the problem. </p> <p>If you search github you will find a few repositories with basic frameworks, you can build on top of them.</p> <ul> <li><a href="https://github.com/daijifeng001/MNC" rel="noreferrer">https://github.com/daijifeng001/MNC</a> (caffe)</li> <li><a href="https://github.com/bernard24/RIS/blob/master/RIS_infer.ipynb" rel="noreferrer">https://github.com/bernard24/RIS/blob/master/RIS_infer.ipynb</a> (torch)</li> <li><a href="https://github.com/jr0th/segmentation" rel="noreferrer">https://github.com/jr0th/segmentation</a> (keras, tensorflow)</li> </ul>
2017-02-27 06:39:51.803000+00:00
2017-02-27 06:39:51.803000+00:00
null
null
41,247,221
<p>I have already gone through the image classification part in Inception model, but I require to count the objects in the image. </p> <p>Considering the flowers data-set, one image can have multiple instances of a flower, so how can I get that count?</p>
2016-12-20 16:42:04.603000+00:00
2017-03-04 04:05:19.510000+00:00
2016-12-20 17:33:55.010000+00:00
image-processing|tensorflow|deep-learning
['https://arxiv.org/pdf/1412.7144v4.pdf', 'https://arxiv.org/pdf/1511.08498v3.pdf', 'https://arxiv.org/pdf/1607.03222v2.pdf', 'https://arxiv.org/pdf/1607.04889v2.pdf', 'https://arxiv.org/pdf/1511.08250v3.pdf', 'https://arxiv.org/pdf/1611.07709v1.pdf', 'https://arxiv.org/pdf/1603.07485v2.pdf', 'https://arxiv.org/pdf/1611.08303v1.pdf', 'https://arxiv.org/pdf/1611.08991v2.pdf', 'https://arxiv.org/pdf/1611.06661v2.pdf', 'https://arxiv.org/pdf/1612.03129v1.pdf', 'https://arxiv.org/pdf/1605.09410v4.pdf', 'https://github.com/daijifeng001/MNC', 'https://github.com/bernard24/RIS/blob/master/RIS_infer.ipynb', 'https://github.com/jr0th/segmentation']
15
56,864,153
<p>The angles (thetas) are passed through the sin() and cos() functions so that the observations are in the range [-1,1]. This fixed range of [-1,1] helps in stabilising the training of the neural network, which has been explained well <a href="https://stackoverflow.com/questions/4674623/why-do-we-have-to-normalize-the-input-for-an-artificial-neural-network">here</a>.</p> <p>You could even use just one of sin() or cos() as your observation. The reason (that I can think of) for using both sin() and cos() is probably to give more information about the state. Maybe using both sin() and cos() leads to faster convergence.</p> <p>But normalisation of the inputs is necessary. So, you cannot just use the raw angles as your state observations for training.</p> <p>Edit: Answer to the comment by @CHEN TIANRONG <a href="https://i.stack.imgur.com/EDpGO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EDpGO.png" alt="Plots "></a></p> <p>I ran DDPG with just sin() and theta_dot in one experiment and with sin(), cos() and theta_dot in another experiment. Clearly the agent never learns the task in the first experiment.</p> <p>The usage of both sin() and cos() is experimental, I guess.</p> <p>You can find the code I used for the experiments <a href="https://github.com/nsidn98/Stack-Overflow/tree/master/Pendulum_states" rel="nofollow noreferrer">here</a>.</p> <p>Improving the rate of convergence of a neural network for RL agents is an active area of research. You could search for algorithms which are sample efficient. For example: <a href="https://arxiv.org/abs/1807.01675" rel="nofollow noreferrer">Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion</a>, <a href="https://arxiv.org/abs/1611.01224" rel="nofollow noreferrer">Sample Efficient Actor-Critic with Experience Replay</a>, etc.</p>
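<p>For concreteness, this is roughly what the encoding discussed above looks like if you build the observation yourself; the velocity scaling constant is an assumption for this sketch (Gym's own Pendulum environment returns the raw angular velocity):</p> <pre><code>import numpy as np

def pendulum_observation(theta, theta_dot, max_speed=8.0):
    # cos/sin keep the angular part bounded in [-1, 1] and avoid the jump
    # at theta = +/- pi; the velocity is scaled separately here.
    return np.array([np.cos(theta), np.sin(theta), theta_dot / max_speed])

print(pendulum_observation(np.pi / 4, 2.0))
</code></pre>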
2019-07-03 06:29:23.117000+00:00
2019-07-05 08:35:45.557000+00:00
2019-07-05 08:35:45.557000+00:00
null
56,843,905
<p>Why the pendulum has cos and sin feature? Can I just use 1 of them? Or can I use theta (the angle) instead? </p> <p>I expect some explanation for this XD, intuitive or theoretical ones are all welcome.</p>
2019-07-02 00:26:24.023000+00:00
2019-07-05 08:35:45.557000+00:00
null
python|openai-gym|pendulum
['https://stackoverflow.com/questions/4674623/why-do-we-have-to-normalize-the-input-for-an-artificial-neural-network', 'https://i.stack.imgur.com/EDpGO.png', 'https://github.com/nsidn98/Stack-Overflow/tree/master/Pendulum_states', 'https://arxiv.org/abs/1807.01675', 'https://arxiv.org/abs/1611.01224']
5
33,315,622
<p>If your input is less than 2^64 (more than enough for your example using an int) there are some good methods:</p> <p>1) <a href="https://en.wikipedia.org/wiki/Baillie%E2%80%93PSW_primality_test" rel="nofollow">BPSW</a>. Fast, deterministic, correct for all 64-bit inputs, no known counterexamples above this (though we believe they exist)</p> <p>2) <a href="https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test#Deterministic_variants_of_the_test" rel="nofollow">Deterministic Miller-Rabin</a>. The Wikipedia page gives some correct but inefficient base sets -- the ones at <a href="https://miller-rabin.appspot.com/" rel="nofollow">Best Known SPRP Bases</a> are the best known for 64-bit inputs. At most 7 tests for any 64-bit input. <a href="http://arxiv.org/abs/1509.00864" rel="nofollow">New (Sep 2015) results</a> give deterministic results for 81-bit input with 13 tests.</p> <p>3) Hashed deterministic M-R. This is just an optimization of #2. Only a single M-R test needed for 32-bit inputs, 2 or 3 for 64-bit inputs. See <a href="http://ceur-ws.org/Vol-1326/020-Forisek.pdf" rel="nofollow">Forisek and Jancina 2015 paper</a> and <a href="https://github.com/danaj/Math-Prime-Util/blob/master/primality.c" rel="nofollow">my different hashed implementation</a>.</p> <p><a href="https://en.wikipedia.org/wiki/Trial_division" rel="nofollow">Trial division</a>, the method you're showing, is quite good for tiny inputs, say under a million or so. It is still computationally ok for a while past that, but it is exponential time in the bit length of the input. It slows down very rapidly, and really isn't usable past 26 or so digits (just because of the huge time growth). In my test, at 25 digits it is 400M times slower than BPSW (a PRP test at this size), 13M times slower than ECPP, 3M times slower than APR-CL.</p> <hr> <p><a href="http://probableprime.org/images/primality-times.png" rel="nofollow">Graph of run times for primality tests on large inputs</a></p> <hr> <p>If your input is larger than 64-bit, some options include:</p> <ul> <li><p>BLS75 methods (from <a href="http://www.ams.org/journals/mcom/1975-29-130/S0025-5718-1975-0384673-1/" rel="nofollow">the seminal 1975 paper</a>), including N-1, N+1, and hybrid methods based on partial factoring. These are still used, and are surprisingly fast for numbers up to ~40 digits. <a href="https://en.wikipedia.org/wiki/Pocklington_primality_test" rel="nofollow">Generalized Pocklington</a> is a special case of one of the theorems. Since this relies on partial factoring of n-1 and/or n+1, it doesn't scale well in general and fizzles out around 80-100 digits for practical use.</p></li> <li><p><a href="https://en.wikipedia.org/wiki/Adleman%E2%80%93Pomerance%E2%80%93Rumely_primality_test" rel="nofollow">APR-CL</a>. Quite fast (e.g. half a second for a 200 digit number). Open source in <a href="http://pari.math.u-bordeaux.fr/" rel="nofollow">Pari/GP</a> and <a href="http://sourceforge.net/projects/mpzaprcl/" rel="nofollow">mpz_aprcl</a>.</p></li> <li><p><a href="https://en.wikipedia.org/wiki/Elliptic_curve_primality" rel="nofollow">ECPP</a>. Fastest method for large inputs not of special form. <a href="http://www.ellipsa.eu/public/primo/primo.html" rel="nofollow">Primo</a> (free to use and the gold standard), <a href="http://sti15.com/nt/ecpp-dj.tar.gz" rel="nofollow">ecpp-dj</a> (open source). This uses randomization, so it isn't deterministic in some sense, but it is 100% correct, which is what many people mean in this context. 
It also can provide a certificate for fast third-party validation, making it especially attractive.</p></li> <li><p><a href="https://en.wikipedia.org/wiki/AKS_primality_test" rel="nofollow">AKS</a>. <em>Horrendously</em> slow. Theoretical breakthrough and fascinatingly simple math, but practically useless. It is faster than trial division at 20 or so digits, and eventually will pass the BLS75 methods, but it's nowhere close to the methods we usually use: APR-CL or ECPP. Various implementations exist, with the fastest I'm aware of being in <a href="http://sti15.com/nt/ecpp-dj.tar.gz" rel="nofollow">ecpp-dj</a> and <a href="https://github.com/danaj/Math-Prime-Util/blob/master/aks.c" rel="nofollow">Perl/ntheory</a> [caveat: I'm the author]. Polynomial time, but the exponent is higher than APR-CL for inputs under a quadrillion or so digits (ridiculously large sizes).</p></li> </ul>
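<p>As a concrete illustration of the deterministic Miller-Rabin option for 64-bit inputs, here is a sketch in Python; the fixed base set below (the first twelve primes) is a known-sufficient choice for this range, although the hashed variants linked above get away with far fewer tests per input:</p> <pre><code>def is_prime_64(n):
    # Deterministic Miller-Rabin for n &lt; 2**64: trial-divide by the bases,
    # then run the strong-pseudoprime test to every base.
    if n &lt; 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

print(is_prime_64(2**61 - 1))  # True: a Mersenne prime
</code></pre>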
2015-10-24 07:21:53.580000+00:00
2015-10-24 20:39:34.977000+00:00
2015-10-24 20:39:34.977000+00:00
null
33,299,118
<p>And also is this method a deterministic method, written below:</p> <pre><code>bool isPrime(int a){
    if( a &lt;= 0) return false;
    if( a == 1) return false;
    if( a == 2) return true;
    if( a == 3) return true;
    int sqr = sqrt(a)+1;
    if( a%2 == 0) return false;
    for(int i=3;i&lt;=sqr;i+=2){
        if( a%i == 0 ) return false;
    }
    return true;
}
</code></pre>
2015-10-23 09:33:38.077000+00:00
2015-10-24 20:39:34.977000+00:00
null
primes|discrete-mathematics|number-theory
['https://en.wikipedia.org/wiki/Baillie%E2%80%93PSW_primality_test', 'https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test#Deterministic_variants_of_the_test', 'https://miller-rabin.appspot.com/', 'http://arxiv.org/abs/1509.00864', 'http://ceur-ws.org/Vol-1326/020-Forisek.pdf', 'https://github.com/danaj/Math-Prime-Util/blob/master/primality.c', 'https://en.wikipedia.org/wiki/Trial_division', 'http://probableprime.org/images/primality-times.png', 'http://www.ams.org/journals/mcom/1975-29-130/S0025-5718-1975-0384673-1/', 'https://en.wikipedia.org/wiki/Pocklington_primality_test', 'https://en.wikipedia.org/wiki/Adleman%E2%80%93Pomerance%E2%80%93Rumely_primality_test', 'http://pari.math.u-bordeaux.fr/', 'http://sourceforge.net/projects/mpzaprcl/', 'https://en.wikipedia.org/wiki/Elliptic_curve_primality', 'http://www.ellipsa.eu/public/primo/primo.html', 'http://sti15.com/nt/ecpp-dj.tar.gz', 'https://en.wikipedia.org/wiki/AKS_primality_test', 'http://sti15.com/nt/ecpp-dj.tar.gz', 'https://github.com/danaj/Math-Prime-Util/blob/master/aks.c']
19
29,796,148
<p>The initial population plays an important role in heuristic algorithms such as GA, as it helps to decrease the time those algorithms need to achieve an acceptable result. Furthermore, it may influence the quality of the final answer given by evolutionary algorithms. (<a href="http://arxiv.org/pdf/1406.4518.pdf" rel="nofollow">http://arxiv.org/pdf/1406.4518.pdf</a>)</p> <p>Since you already know about Koza's different population methods, you should also remember that none of the initialisation algorithms that are used is 100% random, nor can it be, because a deterministic algorithm drives the generation; to some extent you can therefore predict what the next generated individual will look like. Another method you could potentially use is Uniform initialisation (refer to the free pdf: "A Field Guide to Genetic Programming"). The motivation is that, even though a population is created with a certain structure, entire syntax trees can be lost within a few generations due to crossover and selection. Langdon (2000) came up with the idea of a <em>ramped uniform distribution</em>, which effectively allows the user to specify the range of sizes a tree can have; if a tree is generated in the search space that does not fall within that range of sizes, it is automatically discarded, regardless of its fitness evaluation value. From here, the ramped uniform distribution will create an equal number of trees across the size range that you have used - all of which are random, unique permutations of the functions and terminal values you are using. (Again, refer to "A Field Guide to Genetic Programming" for more detail.) This method can be quite useful for sampling when the desired solutions are asymmetric rather than symmetric (which is what ramped half-and-half deals with). </p> <p>Other recommended reading for population initialisation: <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50.962&amp;rep=rep1&amp;type=pdf" rel="nofollow">http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50.962&amp;rep=rep1&amp;type=pdf</a></p>
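<p>A loose sketch of the generate-and-discard idea behind size-constrained initialisation is below; the function and terminal sets, the tuple-based tree representation and the size bounds are all invented for illustration, and this is not Langdon's exact ramped-uniform algorithm (which additionally aims for an even spread over sizes):</p> <pre><code>import random

FUNCTIONS = ['+', '-', '*']          # hypothetical function set
TERMINALS = ['x', 'y', 1.0, 2.0]     # hypothetical terminal set

def random_tree(max_depth):
    # Grow-style generation: stop early at random or when depth runs out.
    if max_depth == 0 or random.random() &lt; 0.3:
        return random.choice(TERMINALS)
    op = random.choice(FUNCTIONS)
    return (op, random_tree(max_depth - 1), random_tree(max_depth - 1))

def tree_size(tree):
    if not isinstance(tree, tuple):
        return 1
    return 1 + tree_size(tree[1]) + tree_size(tree[2])

def size_constrained_init(pop_size, min_size, max_size, max_depth=6):
    # Keep generating random trees, discarding any whose size falls outside
    # the requested range, until the population is full.
    population = []
    while len(population) &lt; pop_size:
        candidate = random_tree(max_depth)
        if min_size &lt;= tree_size(candidate) &lt;= max_size:
            population.append(candidate)
    return population

print(len(size_constrained_init(pop_size=10, min_size=3, max_size=15)))
</code></pre>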
2015-04-22 11:29:29.083000+00:00
2015-04-22 11:29:29.083000+00:00
null
null
12,378,165
<p>Can someone provide me some pointers on population initialization algorithms for genetic programming? </p> <p>I already know about the Grown, Full, Ramped half-half (taken from "A Field Guide to Genetic Programming") and saw one new algorithm <a href="http://www.cs.gmu.edu/~sean/papers/treecreation.pdf" rel="nofollow">Two Fast Tree-Creation</a> (haven't read the paper yet.).</p>
2012-09-11 21:26:50.217000+00:00
2015-04-22 11:29:29.083000+00:00
2012-09-13 16:39:46+00:00
algorithm|artificial-intelligence|genetic-programming
['http://arxiv.org/pdf/1406.4518.pdf', 'http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50.962&rep=rep1&type=pdf']
2
37,194,133
<p>There are solutions for Convolutional Neural Networks apart from just resizing the input to a fixed size.</p> <p><a href="https://arxiv.org/abs/1406.4729" rel="nofollow">Spatial Pyramid Pooling</a> allows you to train and test CNNs with variable-sized images, and it does this by introducing a dynamic pooling layer, where the input can be of any size and the output is of a fixed size, which can then be fed to the fully connected layers.</p> <p>The pooling is very simple: one defines a number of regions in each dimension (say 7x7), and then the layer splits each feature map into non-overlapping 7x7 regions and does max-pooling on each region, outputting a 49-element vector. This can also be applied at multiple scales.</p>
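<p>To make the pooling step concrete, here is a small NumPy sketch of a single pyramid level; it assumes the feature map is at least as large as the grid in each spatial dimension, and a real implementation would live inside the network as a layer rather than in NumPy:</p> <pre><code>import numpy as np

def spp_level(feature_map, bins):
    # feature_map: array of shape (H, W, C) with H, W &gt;= bins.
    # Split H and W into `bins` non-overlapping strips, max-pool each cell,
    # and return a fixed-length vector of size bins * bins * C.
    H, W, C = feature_map.shape
    row_edges = np.linspace(0, H, bins + 1, dtype=int)
    col_edges = np.linspace(0, W, bins + 1, dtype=int)
    out = np.empty((bins, bins, C), dtype=feature_map.dtype)
    for i in range(bins):
        for j in range(bins):
            cell = feature_map[row_edges[i]:row_edges[i + 1],
                               col_edges[j]:col_edges[j + 1]]
            out[i, j] = cell.max(axis=(0, 1))
    return out.ravel()

# Inputs of different spatial sizes produce vectors of the same length.
print(spp_level(np.random.rand(13, 17, 4), bins=7).shape)  # (196,)
print(spp_level(np.random.rand(32, 24, 4), bins=7).shape)  # (196,)
</code></pre>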
2016-05-12 17:49:47.183000+00:00
2016-05-12 17:49:47.183000+00:00
null
null
37,192,415
<p>This question is a tough one: <strong><em>How can I feed a neural network, a dynamic input?</em></strong> </p> <p>Answering this question will certainly help the advance of modern AI using deep learning for applications other than computer vision and speech recognition. I will explain this problem further for the laymen on neural networks.</p> <p><strong>Let's take this simple example for instance:</strong></p> <p>Say you need to know the probability of winning, losing or drawing in a game of "tic-tac-toe".</p> <p>So my <strong>input</strong> could be a [3,3] matrix representing the state (1-You, 2-Enemy, 0-Empty):</p> <pre><code>[2. 1. 0.] [0. 1. 0.] [2. 2. 1.] </code></pre> <p>Let's assume we already have a <strong>previously trained hidden layer</strong>, a [3,1] matrix of weights:</p> <pre><code>[1.5] [0.5] [2.5] </code></pre> <p>So if we use a simple activation function that consists basically of a matrix multiply between the two <strong>y(x)=W*x</strong> we get this [3,1] matrix in the <strong>output</strong>:</p> <pre><code>[2. 1. 0.] [1.5] [3.5] [0. 1. 0.] * [0.5] = [0.5] [2. 2. 1.] [2.5] [6.5] </code></pre> <p>Even without a softmax function you can tell that the highest probability is of having a draw.</p> <p><em><strong>But what if I want this same neural network to work for a 5x5 game of tic-tac-toe?</strong></em></p> <p>It has the same logic as the 3x3, its just bigger. The neural network <strong>should</strong> be able to handle it </p> <p>We would have something like:</p> <pre><code>[2. 1. 0. 2. 0.] [0. 2. 0. 1. 1.] [1.5] [?] [2. 1. 0. 0. 1.] * [0.5] = [?] IMPOSSIBLE [0. 0. 2. 2. 1.] [2.5] [?] [2. 1. 0. 2. 0.] </code></pre> <p>But this multiplication would be <strong>impossible to compute</strong>. We would have to <strong>add more layers and/or change our previously trained one</strong> and <strong>RETRAIN</strong> it, because the untrained weights (initialized with 0 in this case) would cause the neural network to fail, like so:</p> <pre><code> input 1st Layer output1 [2. 1. 0. 2. 0.] [0. 0. 0.] [6.5 0. 0.] [0. 2. 0. 1. 1.] [1.5 0. 0.] [5.5 0. 0.] [2. 1. 0. 0. 1.] * [0.5 0. 0.] = [1.5 0. 0.] [0. 0. 2. 2. 1.] [2.5 0. 0.] [6. 0. 0.] [2. 1. 0. 2. 0.] [0. 0. 0.] [6.5 0. 0.] 2nd Layer output1 final output [6.5 0. 0.] [5.5 0. 0.] [0. 0. 0. 0. 0.] * [1.5 0. 0.] = [0. 0. 0.] POSSIBLE [6. 0. 0.] [6.5 0. 0.] </code></pre> <p>Because we expanded the first layer and added a new layer of zero weights, our <strong>result is obviously inconclusive</strong>. If we apply a softmax function we will realize that the neural network is returning 33.3% chance for every possible outcome. <strong>We would need to train it again</strong>.</p> <p>Obviously we want to create generic neural networks that can adapt to different input sizes, however I haven't thought of a solution for this problem yet! So I thought maybe stackoverflow can help. Thousands of heads think better than one. <strong>Any ideas?</strong></p>
2016-05-12 16:18:33.400000+00:00
2018-04-11 20:25:55.687000+00:00
2018-04-11 20:25:55.687000+00:00
neural-network|deep-learning|matrix-multiplication|max-pooling
['https://arxiv.org/abs/1406.4729']
1
67,487,935
<p>Consider this simple function with no loops at all:</p> <pre><code>bool function (int a, int b, int c)
{
    bool answer = a*a*a + b*b*b + c*c*c != 42;
    // POSTCONDITION: require answer==true
}
</code></pre> <p>It took a few hundred computer-years (or, in real time, a couple of weeks on a 500,000-strong grid of PCs) to discover that there is a <a href="https://arxiv.org/pdf/2007.01209.pdf" rel="nofollow noreferrer">counterexample</a> (pdf):</p> <blockquote> <p>(−80538738812075974)<sup>3</sup> + 80435758145817515<sup>3</sup> + 12602123297335631<sup>3</sup> = 42</p> </blockquote> <p>A program that can do that very fast is a very, very good program indeed.</p>
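<p>The asymmetry the answer is pointing at is easy to see in Python, where arbitrary-precision integers let you verify the published counterexample instantly, even though finding it required an enormous search:</p> <pre><code>a = -80538738812075974
b = 80435758145817515
c = 12602123297335631

# Checking the published solution is a handful of big-integer operations...
print(a**3 + b**3 + c**3)  # 42

# ...while the search had to sift through candidates of this magnitude.
print(max(abs(a), abs(b), abs(c)))
</code></pre>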
2021-05-11 13:37:42.680000+00:00
2021-05-11 13:37:42.680000+00:00
null
null
67,486,164
<p>I have a function in a programming language, e.g. C. I require the output of the function to meet a certain condition. If there is some input to this function for which the output does not meet the required condition, I need to find any such exact input.</p> <p>I need to do this in general, but for rather simple functions, e.g. the number of loops is fixed and does not depend on the input. Another requirement is that I need to do this very fast. I found that the CBMC tool [https://www.cprover.org/cbmc/] may help me, but I am not sure how to use it. I also welcome solutions which convert the problem into a CNF formula (but I still need to retrieve the counterexample input).</p> <p>An example of the function:</p> <pre><code>int function(int n) {
    int m = 0;
    for(int i = 1; i &lt; 8; i++) {
        m += n*i;
    }
    int output = m % 11;
    return output;
}

// POSTCONDITION: require the output &lt; 10 for all inputs
// VERIFICATION: this is not true, the counterexample is the input n=9.
</code></pre>
2021-05-11 11:50:28.150000+00:00
2021-05-11 13:37:42.680000+00:00
2021-05-11 12:29:18.670000+00:00
c|testing|formal-verification|software-quality|post-conditions
['https://arxiv.org/pdf/2007.01209.pdf']
1
56,190,762
<p>In short, </p> <ol> <li>Learn a fixed-length embedding (representation) of each event;</li> <li>Learn a way to combine a sequence of such embeddings into a single representation for each sequence of events (i.e. each customer), then use your favorite unsupervised methods.</li> </ol> <p>For (1), you can do it either manually or with an encoder/decoder; for (2), there is a range of things you can do, from simply averaging the embeddings of each event, to training an <a href="https://towardsdatascience.com/understanding-encoder-decoder-sequence-to-sequence-model-679e04af4346" rel="nofollow noreferrer">encoder-decoder</a> to reconstruct the original sequence of events and taking the intermediate representation (the one the decoder uses to reconstruct the original sequence). </p> <p>A good read on this topic (though a bit old; you now also have the option of the <a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">Transformer Network</a>):</p> <p><a href="https://nlp.stanford.edu/manning/talks/Simons-Institute-Manning-2017.pdf" rel="nofollow noreferrer">Representations for Language: From Word Embeddings to Sentence Meanings</a></p>
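<p>For the simplest variant (averaging the per-event vectors of each customer and then clustering), a sketch might look like the following; the feature extraction step is left as a stub because it depends entirely on what your event rows contain, and the cluster count is arbitrary:</p> <pre><code>import numpy as np
from sklearn.cluster import KMeans

def event_to_vector(event):
    # Stub: turn one event row into a fixed-length numeric vector
    # (hand-crafted features, one-hot page ids, a learned embedding, ...).
    return np.asarray(event, dtype=float)

def cluster_customers(events_by_customer, n_clusters=5):
    # events_by_customer: dict mapping a customer id -&gt; list of event rows.
    ids = sorted(events_by_customer)
    X = np.vstack([np.mean([event_to_vector(e) for e in events_by_customer[c]], axis=0)
                   for c in ids])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    return dict(zip(ids, labels))

# Toy usage with already-numeric "events".
demo = {1: [[0, 1], [1, 1]], 2: [[9, 9]], 3: [[8, 9], [9, 8]], 4: [[0, 0]]}
print(cluster_customers(demo, n_clusters=2))
</code></pre>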
2019-05-17 17:05:14.097000+00:00
2019-05-17 17:05:14.097000+00:00
null
null
56,176,156
<p>I am working to setup data for an unsupervised learning algorithm. The goal of the project is to group (cluster) different customers together based on their behavior on the website. Obviously, some sort of clustering algorithm is best for discovering patterns in the data we can't see as humans.</p> <p>However, the database contains multiple rows for each customer (in chronological order) for each action the customer took on the website for that visit. For example customer with ID# 123 clicked on page 1 at time X and that would be a row in the database, and then the same customer clicked another page at time Y. That would make another row in the database. </p> <p>My question is what algorithm or approach would you use for clustering in this given scenario? K-means is really popular for this type of problem, but I don't know if it's possible to use in this situation because of the grouping. Is it somehow possible to do cluster analysis around one specific ID that includes multiple rows?</p> <p>Any help/direction of unsupervised learning I should take is appreciated.</p>
2019-05-16 20:28:36.223000+00:00
2019-05-17 17:05:14.097000+00:00
2019-05-16 20:54:57.653000+00:00
machine-learning|cluster-analysis|k-means|data-analysis|unsupervised-learning
['https://towardsdatascience.com/understanding-encoder-decoder-sequence-to-sequence-model-679e04af4346', 'https://arxiv.org/abs/1706.03762', 'https://nlp.stanford.edu/manning/talks/Simons-Institute-Manning-2017.pdf']
3
58,397,931
<p>The function <code>hypot</code> offers <strong>another</strong> approximation of the mathematical expression √(x<sup>2</sup> + y<sup>2</sup>), just like the floating-point expression <code>sqrt(x*x + y*y)</code> is an approximation of this same mathematical expression.</p> <p>The function <code>hypot</code> is recommended because it solves very noticeable defects that are present in the floating-point computation <code>sqrt(x*x + y*y)</code> with very large or small values. For instance, if <code>x</code> is only a bit larger than the square root of the maximum finite floating-point value, <code>sqrt(x*x + y*y)</code> always produces <code>+inf</code> because <code>x*x</code> produces <code>+inf</code>.</p> <p>Compare:</p> <pre><code>&gt;&gt;&gt; x, y = 95E200, 168E200
&gt;&gt;&gt; sqrt(x*x + y*y), hypot(x, y)
(inf, 1.93e+202)
&gt;&gt;&gt; z, t = 95E-200, 168E-200
&gt;&gt;&gt; sqrt(z*z + t*t), hypot(z, t)
(0.0, 1.93e-198)
</code></pre> <p>For these two (respectively very large and very small) pairs of inputs, <code>hypot</code> is doing fine, whereas <code>sqrt(x*x + y*y)</code> is catastrophically wrong.</p> <hr> <p>When the naïve version <code>sqrt(x*x + y*y)</code> works reasonably well (when the values <code>x</code> and <code>y</code> are neither very large nor very small), it may be more or less accurate than the function <code>hypot</code> depending on the values of <code>x</code> and <code>y</code>. They can both be expected to produce a result that is a few ULPs away from the mathematical result. But since they are different approximations obtained by different methods, they may differ (in the worst case by twice “a few ULPs”).</p> <p>One typical implementation for <code>hypot(x, y)</code> is first to swap <code>x</code> and <code>y</code> if necessary so that <code>x</code> has the larger magnitude, and then compute <code>x * sqrt(1 + (y/x)*(y/x))</code>. This solves the problem with <code>x*x</code> overflowing. As a side-effect, it means that <strong>even when there is no overflow</strong>, the result is slightly different from <code>sqrt(x*x + y*y)</code>.</p> <p>Note that it's normal that <code>sqrt(x*x + y*y)</code> is more precise when you apply it to small integers (as you do in your test): when <code>x</code> and <code>y</code> are small integers, <code>x*x</code> and <code>y*y</code> and their sum can be computed exactly as floating-point values. If this sum is the square of an integer, the floating-point function <code>sqrt</code> can only compute this integer. In short, in this scenario the computations, despite being floating-point, are exact from beginning to end. In contrast, the typical <code>hypot</code> implementation above starts by computing <code>y/x</code> (in your test, <code>95.0/168.0</code>), and this result is not in general representable exactly as a floating-point value. The first step already incurs an approximation, and this approximation can result in the final result being wrong (as it is in your test)!</p> <hr> <p>There is no standard algorithm for <code>hypot</code>: it is only expected to compute a good approximation of the mathematical expression √(x<sup>2</sup> + y<sup>2</sup>) while avoiding the overflow and underflow problems. 
<a href="https://arxiv.org/pdf/1904.09481.pdf" rel="nofollow noreferrer">This article</a> shows different implementations, and points out that the popular implementation that I mentioned sacrifices accuracy to avoid overflow and underflow (but the article also provides a floating-point implementation for <code>hypot</code> that is <strong>more accurate</strong> than <code>sqrt(x*x + y*y)</code> even where <code>sqrt(x*x + y*y)</code> works).</p>
2019-10-15 15:16:09.870000+00:00
2019-10-16 20:43:18.507000+00:00
2019-10-16 20:43:18.507000+00:00
null
58,397,779
<p>I'm trying to run some tests on the shiny new Python 3.8 and noticed an issue with <a href="https://docs.python.org/3.8/library/math.html#math.hypot" rel="nofollow noreferrer"><code>math.hypot</code></a>. From the docs: </p> <blockquote> <p>For a two dimensional point <code>(x, y)</code>, this is equivalent to computing the hypotenuse of a right triangle using the Pythagorean theorem, <code>sqrt(x*x + y*y)</code>.</p> </blockquote> <p>However, these are not equivalent in 3.8:</p> <pre><code>&gt;&gt;&gt; from math import hypot, sqrt
&gt;&gt;&gt; x, y = 95, 168
&gt;&gt;&gt; sqrt(x*x + y*y), hypot(x, y), sqrt(x*x + y*y) == hypot(x, y)
(193.0, 193.00000000000003, False)
&gt;&gt;&gt; sqrt(x*x + y*y).is_integer(), hypot(x, y).is_integer()
(True, False)
</code></pre> <p>In 3.7 both ways produce exactly the same result (<code>"193.0"</code>, which is considered an integer).</p>
2019-10-15 15:08:33.970000+00:00
2019-10-16 20:43:18.507000+00:00
2019-10-16 07:20:33.077000+00:00
python|math|floating-point|python-3.8
['https://arxiv.org/pdf/1904.09481.pdf']
1
50,518,472
<p>There is a paper called <a href="https://arxiv.org/pdf/0909.4061.pdf" rel="nofollow noreferrer">Finding Structure in Randomness</a> that addresses some points about all of these decompositions, as well as the SVD, which is covered in <a href="http://bookstore.siam.org/ot50/" rel="nofollow noreferrer">Trefethen and Bau</a>. </p> <ol> <li>The interpolative decomposition is used in different places. A paper that explores it is <a href="https://arxiv.org/pdf/1412.8447.pdf" rel="nofollow noreferrer">here.</a></li> <li>U and V are unitary matrices. C is a matrix containing a subset of the columns of A, and R a subset of the rows. </li> </ol>
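<p>For point 2, a bare-bones illustration of how C, U and R can be formed is below; it uses simple norm-proportional column/row sampling and a pseudoinverse-based middle matrix, which is only one of several CUR variants discussed in the references above, not a faithful reproduction of any particular paper:</p> <pre><code>import numpy as np

def simple_cur(A, k, seed=0):
    # Sample k columns and k rows with probability proportional to their
    # squared norms, then pick U so that C @ U @ R approximates A.
    rng = np.random.default_rng(seed)
    col_p = (A ** 2).sum(axis=0)
    col_p = col_p / col_p.sum()
    row_p = (A ** 2).sum(axis=1)
    row_p = row_p / row_p.sum()
    cols = rng.choice(A.shape[1], size=k, replace=False, p=col_p)
    rows = rng.choice(A.shape[0], size=k, replace=False, p=row_p)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 5)) @ rng.normal(size=(5, 30))   # a rank-5 matrix
C, U, R = simple_cur(A, k=10)
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # tiny for low-rank A
</code></pre>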
2018-05-24 21:33:20.483000+00:00
2018-05-26 20:48:34.310000+00:00
2018-05-26 20:48:34.310000+00:00
null
49,557,669
<p>I have understood how <a href="https://en.wikipedia.org/wiki/CUR_matrix_approximation" rel="nofollow noreferrer">CUR</a> and <a href="https://en.wikipedia.org/wiki/Singular-value_decomposition" rel="nofollow noreferrer">SVD</a> works, but not able to understand,<br></p> <ol> <li>How we can use CUR in place of SVD decomposition?</li> <li>Does C and R matrices in CUR follow the same properties as that of U and V matrices in SVD decomposition?</li> </ol> <p>If we want to reduce the dimension of original matrix say from n to k, which matrix of CUR we can use to project original matrix, so that we will get k-dimensional data points.</p>
2018-03-29 13:42:37.797000+00:00
2018-05-26 20:48:34.310000+00:00
2018-03-30 19:29:21.277000+00:00
matrix|linear-algebra|svd
['https://arxiv.org/pdf/0909.4061.pdf', 'http://bookstore.siam.org/ot50/', 'https://arxiv.org/pdf/1412.8447.pdf']
3
66,844,388
<ol> <li><p>Yes, but the definition of &quot;average&quot; is important. If you supply a &quot;background&quot; dataset, your explanations will be calculated against this background, not against the whole dataset. As for &quot;relative to the average&quot; of the background: one needs to understand that SHAP values are average marginal contributions over all possible coalitions. So as far as SHAP values are concerned, you fix the coalition(s), and the rest is, yes, averaged. This allows fitting the model once, and then passing different coalitions (with the rest averaged) through the model that was trained only once. This is where SHAP's time savings come from.<br /> If you're interested in more, you may visit the original <a href="https://arxiv.org/abs/1705.07874" rel="nofollow noreferrer">paper</a> or this <a href="https://ai.plainenglish.io/understanding-shap-for-interpretable-machine-learning-35e8639d03db" rel="nofollow noreferrer">blog</a>.</p> </li> <li><p>Yes. You supply a single data row as background; for binary classification, e.g., supply a data row from the other class for explanation, and see which features, and by how much, changed the class output.</p> </li> </ol>
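<p>A sketch of what point 2 can look like in code, assuming a recent shap version whose <code>TreeExplainer</code> accepts a background <code>data</code> argument; the toy model and frame below are stand-ins for the real LightGBM model and training data:</p> <pre><code>import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Tiny stand-in model and data; replace with your own model and DataFrame.
X = pd.DataFrame(np.random.RandomState(0).rand(200, 4), columns=list("abcd"))
y = X["a"] * 3 + X["b"]
model = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)

# Use one chosen row as the background: explanations of other rows are then
# expressed relative to the model's prediction for that single reference row.
background = X.iloc[[0]]
explainer = shap.TreeExplainer(model, data=background)
shap_values = explainer.shap_values(X.iloc[[1]])
print(shap_values)
</code></pre>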
2021-03-28 17:48:06.173000+00:00
2021-03-28 18:03:53.050000+00:00
2021-03-28 18:03:53.050000+00:00
null
66,805,128
<p>I am trying to explain a regression model based on LightGBM using <code>SHAP</code>. I'm using the</p> <pre><code>shap.TreeExplainer(&lt;lightgbm model&gt;).shap_values(X) </code></pre> <p>method to get the <code>SHAP</code> values, where X is the entire training dataset. These <code>SHAP</code> values give me comparison of an individual prediction, compared to the average prediction of the entire dataset.</p> <p>In the online book by Christopher Molnar, section 5.9.4, he mentions that:</p> <blockquote> <p>&quot;Instead of comparing a prediction to the average prediction of the entire dataset, you could compare it to a subset or even to a single data point.&quot;</p> </blockquote> <p>I have a couple of questions regarding this:</p> <ol> <li>Am I correct to interpret that if, instead of passing the entire training dataset, I pass a subset of say 20 observations, then the SHAP values returned will be relative to the average of these 20 observations? This will be the equivalent of &quot;subset&quot; that Christopher Molnar mentioned in his book</li> <li>Assuming that the answer to question 1 is yes, what if, instead of generating SHAP values relative to the average of 20 observations, I want to generate SHAP values relative to one specific observation. Christopher Molnar seems to imply that is possible. If it is possible, how do I do that?</li> </ol> <p>Thank you in advance for the guidance!</p>
2021-03-25 17:59:34.053000+00:00
2022-05-03 08:42:13.073000+00:00
2022-05-03 08:42:13.073000+00:00
python|shap
['https://arxiv.org/abs/1705.07874', 'https://ai.plainenglish.io/understanding-shap-for-interpretable-machine-learning-35e8639d03db']
2
19,837,238
<p>"<a href="http://arxiv.org/abs/cs.PF/0502012" rel="nofollow">Sequential File Programming Patterns and Performance with .NET</a>" is a great article in I/O performance improvement.</p> <p>In page 8 of <a href="http://arxiv.org/pdf/cs/0502012v1" rel="nofollow">this</a> PDF file, it shows that the bandwidth for buffer size bigger than eight bytes, is constant. Consider that the article has been written in 2004 and the hard disk drive is "<em>Maxtor 250 GB 7200 RPM SATA disk</em>" and the result should be different by latest I/O technologies.</p> <p>If you are looking for the best performance take a look at <a href="http://www.pinvoke.net/default.aspx/kernel32.ReadFile" rel="nofollow">pinvoke.net</a> or the page 9 of the PDF file, the un-buffered file performance measurements shows better results:</p> <blockquote> <p>In un-buffered I/O, the disk data moves directly between the application’s address space and the device without any intermediate copying.</p> </blockquote> <h2>Summary</h2> <ul> <li>For single disks, use the defaults of the .NET framework – they deliver excellent performance for sequential file access.</li> <li>Pre-allocate large sequential files (using the SetLength() method) when the file is created. This typically improves speed by about 13% when compared to a fragmented file.</li> <li>At least for now, disk arrays require un-buffered I/O to achieve the highest performance - buffered I/O can be eight times slower than un-buffered I/O. We expect this problem will be addressed in later releases of the .NET framework.</li> <li>If you do your own buffering, use large request sizes (64&nbsp;KB is a good place to start). Using the .NET framework, a single processor can read and write a disk array at over 800 Mbytes/s using un-buffered I/O.</li> </ul>
2013-11-07 13:29:18.727000+00:00
2015-06-26 18:26:15.213000+00:00
2015-06-26 18:26:15.213000+00:00
null
19,558,435
<p>I'm reading binary files and here is a sample:</p> <pre><code>public static byte[] ReadFully(Stream input) { byte[] buffer = new byte[16*1024]; int read; while ((read = input.Read(buffer, 0, buffer.Length)) &gt; 0) { ...... } } </code></pre> <p>Obviously the buffer size (16*1024) has a great role in performance. I've read that it depends on the I/O technology (<a href="http://en.wikipedia.org/wiki/Serial_ATA" rel="nofollow noreferrer">SATA</a>, <a href="http://en.wikipedia.org/wiki/Solid-state_drive" rel="nofollow noreferrer">SSD</a>, <a href="http://en.wikipedia.org/wiki/SCSI" rel="nofollow noreferrer">SCSI</a>, etc.) and also the fragment size of the partition which file exists on it (we can define during the formatting the partition).</p> <p>But here is the question: <strong>Is there any formula or best practice to define the buffer size?</strong> Right now, I'm defining based on trial-and-error.</p> <p><strong>Edit:</strong> I've tested the application on my server with different buffer sizes, and I get the best performance with 4095*256*16 (16&nbsp;MB)!!! 4096 is 4 seconds slower.</p> <p>Here are some older posts which are very helpful but I can't still get the reason:</p> <ul> <li><p><em><a href="https://stackoverflow.com/questions/1238388/faster-unsafe-binaryreader-in-net">Faster (unsafe) BinaryReader in .NET</a></em></p></li> <li><p><em><a href="https://stackoverflow.com/questions/1552107/optimum-file-buffer-read-size">Optimum file buffer read size?</a></em></p></li> <li><p><em><a href="https://stackoverflow.com/questions/3033771/file-io-with-streams-best-memory-buffer-size">File I/O with streams - best memory buffer size</a></em></p></li> <li><p><em><a href="https://stackoverflow.com/questions/236861/how-do-you-determine-the-ideal-buffer-size-when-using-fileinputstream">How do you determine the ideal buffer size when using FileInputStream?</a></em></p></li> </ul>
2013-10-24 06:22:21.553000+00:00
2015-06-26 18:28:21.660000+00:00
2017-05-23 12:17:02.420000+00:00
c#|.net|windows|performance|filesystems
['http://arxiv.org/abs/cs.PF/0502012', 'http://arxiv.org/pdf/cs/0502012v1', 'http://www.pinvoke.net/default.aspx/kernel32.ReadFile']
3
55,161,358
<p>Having <code>100%</code> accuracy on the train dataset while having <code>80%</code> accuracy on the test dataset doesn't mean that your model overfits. Moreover, it almost surely doesn't overfit if your model is equipped with many more effective parameters than the number of training samples <code>[2]</code>, <code>[5]</code> (insanely large model example <code>[1]</code>). This contradicts conventional statistical learning theory, but these are empirical results.</p> <p>For models with a number of parameters greater than the number of samples, it's better to continue to optimize the logistic or cross-entropy loss even after the training error is zero and the training loss is extremely small, and even if the validation loss increases <code>[3]</code>. This may hold even regardless of batch size <code>[4]</code>.</p> <h3>Clarifications (edit)</h3> <ul> <li>The "models" I was referring to are neural networks with two or more hidden layers (possibly with convolutional layers prior to the dense layers).</li> <li><code>[1]</code> is cited to show a clear contradiction to classical statistical learning theory, which says that large models may overfit without some form of regularization.</li> <li>I would invite anyone who disagrees with <em>"almost surely doesn't overfit"</em> to provide a reproducible example where models, say for MNIST/CIFAR etc. with a few hundred thousand parameters, do overfit (in the sense of a test error curve that increases with iterations).</li> </ul> <p><code>[1]</code> <a href="https://arxiv.org/pdf/1701.06538.pdf" rel="nofollow noreferrer">Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. CoRR, abs/1701.06538, 2017.</a></p> <p><code>[2]</code> <a href="https://www.padl.ws/papers/Paper%2010.pdf" rel="nofollow noreferrer">Lei Wu, Zhanxing Zhu, et al. Towards understanding generalization of deep learning: Perspective of loss landscapes. arXiv preprint arXiv:1706.10239, 2017.</a></p> <p><code>[3]</code> <a href="https://arxiv.org/pdf/1710.10345.pdf" rel="nofollow noreferrer">Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. The Journal of Machine Learning Research, 19(1):2822–2878, 2018.</a></p> <p><code>[4]</code> <a href="https://arxiv.org/pdf/1705.08741.pdf" rel="nofollow noreferrer">Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. In Advances in Neural Information Processing Systems, pages 1731–1741, 2017.</a></p> <p><code>[5]</code> <a href="https://arxiv.org/pdf/1611.03530.pdf" rel="nofollow noreferrer">Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.</a></p>
2019-03-14 11:27:46.413000+00:00
2019-03-14 13:52:59.347000+00:00
2019-03-14 13:52:59.347000+00:00
null
55,157,832
<pre><code>classifier.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
classifier.fit(X_train, y_train, epochs=50, batch_size=100)

Epoch 1/50
27455/27455 [==============================] - 3s 101us/step - loss: 2.9622 - acc: 0.5374
</code></pre> <p>I know I'm compiling my model in the first line and fitting it in the second. I know what an optimiser is. I'm interested in the meaning of <code>metrics=['accuracy']</code> and what the <code>acc: XXX</code> exactly means when I compile the model. Also, I'm getting <code>acc : 1.000</code> when I train my model (100%), but when I test my model I'm getting 80% accuracy. Is my model overfitting? </p>
2019-03-14 08:21:42.827000+00:00
2019-03-14 16:29:32.493000+00:00
2019-03-14 08:30:19.487000+00:00
python|tensorflow|machine-learning|keras|deep-learning
['https://arxiv.org/pdf/1701.06538.pdf', 'https://www.padl.ws/papers/Paper%2010.pdf', 'https://arxiv.org/pdf/1710.10345.pdf', 'https://arxiv.org/pdf/1705.08741.pdf', 'https://arxiv.org/pdf/1611.03530.pdf']
5
48,728,126
<p>Firstly, I conclude you <a href="http://rextester.com/JDME53734" rel="nofollow noreferrer">must be using MSVC</a> since your code is not valid standard C++.</p> <p>Fixing things shows it appears to work properly on <a href="http://coliru.stacked-crooked.com/a/15a3428f1a5f7eff" rel="nofollow noreferrer">Clang</a> and <a href="http://coliru.stacked-crooked.com/a/092f68e30b8cfaf8" rel="nofollow noreferrer">GCC</a>.</p> <blockquote> <p>Enabling AddressSanitizer quickly reveals errors that I /believe/ might originate in the singleton code deep inside Boost Serialization. I'll ignore it for now.</p> </blockquote> <p>Due to those errors, I looked at this a long time, trying to see whether the code was to blame.</p> <p>While doing so, I found that many things could be made a lot simpler.</p> <ul> <li><p>you can do without construct-data if you just add <em>private</em> default constructors for serialization purposes.</p> <p>The important thing about your classes are the invariants, and it's easy to prove that invariants are kept across deserialization this way.</p></li> <li><p>you can do without the subclasses for each binary operator, because they add literally no behaviour. Consider providing the <code>message</code> constant out-of-band</p></li> <li><p>instead of making a <code>vector&lt;unique_ptr&gt;</code> you can use a pointer container that implicitly owns its elements. This makes e.g. looking up equivalent pointers a <em>lot</em> easier:</p> <pre><code>namespace memory_management { struct address_less { bool operator()(Expr const&amp; a, Expr const&amp; b) const { return &amp;a &lt; &amp;b; } }; static boost::ptr_set&lt;Expr, address_less&gt; pool; } void Expr::self_register() { memory_management::pool.insert(this); } </code></pre></li> </ul> <p>All in all things are much shorter:</p> <pre><code>// AST struct Expr { virtual ~Expr() = default; virtual std::vector&lt;Expr const *&gt; children() const { return {}; } virtual std::string identity() const = 0; void print(int p) const { std::cout &lt;&lt; std::setw(p) &lt;&lt; ' '; std::cout &lt;&lt; identity() &lt;&lt; "\n"; for (auto a_kid : children()) { a_kid-&gt;print(p + 2); } } protected: Expr() { self_register(); } private: void self_register(); friend class boost::serialization::access; template &lt;class Archive&gt; void serialize(Archive &amp;, unsigned) {} }; namespace memory_management { struct address_less { bool operator()(Expr const&amp; a, Expr const&amp; b) const { return &amp;a &lt; &amp;b; } }; static boost::ptr_set&lt;Expr, address_less&gt; pool; } void ::Expr::self_register() { memory_management::pool.insert(this); } struct Int : Expr { std::string identity() const override { return "Int[" + std::to_string(n) + "]@"; } Int(int nn) : n(nn) {} private: int const n = 0; friend class boost::serialization::access; Int() = default; template &lt;class Archive&gt; void serialize(Archive &amp;ar, unsigned) { ar &amp; boost::serialization::base_object&lt;Expr&gt;(*this) &amp; const_cast&lt;int&amp;&gt;(n); } }; namespace Tags { struct Mul; struct Div; struct Plus; struct Minus; template &lt;typename T&gt; constexpr char const* const message = "Unknown"; template &lt;&gt; constexpr char const* const message&lt;Mul&gt; = "Mul"; template &lt;&gt; constexpr char const* const message&lt;Div&gt; = "Div"; template &lt;&gt; constexpr char const* const message&lt;Plus&gt; = "Plus"; template &lt;&gt; constexpr char const* const message&lt;Minus&gt; = "Minus"; } template &lt;class T&gt; struct Op2 : Expr { std::vector&lt;Expr const *&gt; 
children() const override { return { l, r }; } std::string identity() const override { return Tags::message&lt;T&gt;; } Op2(Expr *ll, Expr *rr) : l(ll), r(rr) {} protected: friend class boost::serialization::access; Op2() = default; Expr *l = nullptr; Expr *r = nullptr; template &lt;class Archive&gt; void serialize(Archive &amp;ar, unsigned) { ar &amp; boost::serialization::base_object&lt;Expr&gt;(*this) &amp; l &amp; r; } }; using Mul = Op2&lt;Tags::Mul&gt;; using Div = Op2&lt;Tags::Div&gt;; using Plus = Op2&lt;Tags::Plus&gt;; using Minus = Op2&lt;Tags::Minus&gt;; </code></pre> <h2>Bonus: A DSL For AST Building</h2> <p>I thought it would be nicer to be able to say:</p> <pre><code>Expr const* root((as_expr(3) * 2) + 5 + (as_expr(7) / 25)); </code></pre> <p>So, let's do that:</p> <pre><code>namespace builder { struct Atom { Atom(Expr* expr) : expr(expr) {} Atom(int i) : expr(new Int(i)) {} Expr* expr; explicit operator Expr const*() const { return expr; } }; template &lt;typename T&gt; Atom as_expr(T&amp;&amp; v) { return std::forward&lt;T&gt;(v); } Atom operator+(Atom a, Atom b) { return new Plus(a.expr, b.expr); } Atom operator-(Atom a, Atom b) { return new Minus(a.expr, b.expr); } Atom operator*(Atom a, Atom b) { return new Mul(a.expr, b.expr); } Atom operator/(Atom a, Atom b) { return new Div(a.expr, b.expr); } } </code></pre> <h2>LIVE DEMO</h2> <p><strong><kbd><a href="http://coliru.stacked-crooked.com/a/18cc9cff08b370cb" rel="nofollow noreferrer">Live On Coliru</a></kbd></strong></p> <pre><code>#include &lt;boost/archive/text_iarchive.hpp&gt; #include &lt;boost/archive/text_oarchive.hpp&gt; #include &lt;boost/serialization/base_object.hpp&gt; #include &lt;boost/serialization/export.hpp&gt; #include &lt;boost/serialization/serialization.hpp&gt; #include &lt;boost/ptr_container/ptr_set.hpp&gt; #include &lt;fstream&gt; #include &lt;iomanip&gt; #include &lt;iostream&gt; #include &lt;string&gt; #include &lt;vector&gt; // AST struct Expr { virtual ~Expr() = default; virtual std::vector&lt;Expr const *&gt; children() const { return {}; } virtual std::string identity() const = 0; void print(int p) const { std::cout &lt;&lt; std::setw(p) &lt;&lt; ' '; std::cout &lt;&lt; identity() &lt;&lt; "\n"; for (auto a_kid : children()) { a_kid-&gt;print(p + 2); } } protected: Expr() { self_register(); } private: void self_register(); friend class boost::serialization::access; template &lt;class Archive&gt; void serialize(Archive &amp;, unsigned) {} }; namespace memory_management { struct address_less { bool operator()(Expr const&amp; a, Expr const&amp; b) const { return &amp;a &lt; &amp;b; } }; static boost::ptr_set&lt;Expr, address_less&gt; pool; } void ::Expr::self_register() { memory_management::pool.insert(this); } struct Int : Expr { std::string identity() const override { return "Int[" + std::to_string(n) + "]@"; } Int(int nn) : n(nn) {} private: int const n = 0; friend class boost::serialization::access; Int() = default; template &lt;class Archive&gt; void serialize(Archive &amp;ar, unsigned) { ar &amp; boost::serialization::base_object&lt;Expr&gt;(*this) &amp; const_cast&lt;int&amp;&gt;(n); } }; namespace Tags { struct Mul; struct Div; struct Plus; struct Minus; template &lt;typename T&gt; constexpr char const* const message = "Unknown"; template &lt;&gt; constexpr char const* const message&lt;Mul&gt; = "Mul"; template &lt;&gt; constexpr char const* const message&lt;Div&gt; = "Div"; template &lt;&gt; constexpr char const* const message&lt;Plus&gt; = "Plus"; template &lt;&gt; constexpr char const* 
const message&lt;Minus&gt; = "Minus"; } template &lt;class T&gt; struct Op2 : Expr { std::vector&lt;Expr const *&gt; children() const override { return { l, r }; } std::string identity() const override { return Tags::message&lt;T&gt;; } Op2(Expr *ll, Expr *rr) : l(ll), r(rr) {} protected: friend class boost::serialization::access; Op2() = default; Expr *l = nullptr; Expr *r = nullptr; template &lt;class Archive&gt; void serialize(Archive &amp;ar, unsigned) { ar &amp; boost::serialization::base_object&lt;Expr&gt;(*this) &amp; l &amp; r; } }; using Mul = Op2&lt;Tags::Mul&gt;; using Div = Op2&lt;Tags::Div&gt;; using Plus = Op2&lt;Tags::Plus&gt;; using Minus = Op2&lt;Tags::Minus&gt;; namespace builder { struct Atom { Atom(Expr* expr) :expr(expr){} Atom(int i) :expr(new Int(i)){} Expr* expr; explicit operator Expr const*() const { return expr; } }; template &lt;typename T&gt; Atom as_expr(T&amp;&amp; v) { return std::forward&lt;T&gt;(v); } Atom operator+(Atom a, Atom b) { return new Plus(a.expr, b.expr); } Atom operator-(Atom a, Atom b) { return new Minus(a.expr, b.expr); } Atom operator*(Atom a, Atom b) { return new Mul(a.expr, b.expr); } Atom operator/(Atom a, Atom b) { return new Div(a.expr, b.expr); } } BOOST_CLASS_EXPORT(Expr) BOOST_CLASS_EXPORT(Int) BOOST_CLASS_EXPORT(Mul) BOOST_CLASS_EXPORT(Div) BOOST_CLASS_EXPORT(Plus) BOOST_CLASS_EXPORT(Minus) int main() { std::cout &lt;&lt; std::unitbuf; { using builder::as_expr; Expr const* root((as_expr(3) * 2) + 5 + (as_expr(7) / 25)); root-&gt;print(2); std::ofstream of("arxiv"); boost::archive::text_oarchive oa(of); oa &lt;&lt; root; } std::cout &lt;&lt; "===================\n"; { std::ifstream isf("arxiv"); boost::archive::text_iarchive is(isf); Expr *expr = nullptr; is &gt;&gt; expr; expr-&gt;print(2); } memory_management::pool.clear(); // no memory leaks } </code></pre> <p>Prints</p> <pre><code> Plus Plus Mul Int[3]@ Int[2]@ Int[5]@ Div Int[7]@ Int[25]@ =================== Plus Plus Mul Int[3]@ Int[2]@ Int[5]@ Div Int[7]@ Int[25]@ </code></pre>
2018-02-11 03:57:55.713000+00:00
2018-02-11 03:57:55.713000+00:00
null
null
48,721,255
<p>Context: I have a tree-like structure representing a AST of Expr that I want to serialize using <code>boost::serialization</code>. The main issue is that all classes have non default constructors and const children. To overcome this issue, I followed the doc and overloaded <code>load_construct_data</code> and <code>save_construct_data</code> (which end up doing all the work).</p> <p>My question is about the <code>Mul</code> class within the code. To factorize code, I developed a template class <code>Op2</code> that is used to define operators such as <em>Add</em> or <code>Mul</code> (only <code>Mul</code> is shown here) through CRTP on these classes. In <code>Mul::serialize</code>, I directly register <code>Expr</code> as base class of <code>Mul</code>and completely skip <code>Op2</code>. <strong>The code works, valgrind is happy, but is it correct ? Or does boost::serialization require to ave the complete class hierarchy ?</strong></p> <pre><code>#include &lt;boost/archive/text_iarchive.hpp&gt; #include &lt;boost/archive/text_oarchive.hpp&gt; #include &lt;boost/serialization/base_object.hpp&gt; #include &lt;boost/serialization/export.hpp&gt; #include &lt;boost/serialization/serialization.hpp&gt; #include &lt;fstream&gt; #include &lt;iomanip&gt; #include &lt;iostream&gt; #include &lt;map&gt; #include &lt;memory&gt; #include &lt;sstream&gt; #include &lt;string&gt; #include &lt;vector&gt; //forward declaration of my structs struct Expr; struct Mul; struct Int; //forward declarations of custom boost functions to friend them in the class namespace b_ser = boost::serialization; namespace boost { namespace serialization { template &lt;class Archive&gt; void load_construct_data(Archive &amp;ar, Mul *e, const unsigned int); template &lt;class Archive&gt; void save_construct_data(Archive &amp;ar, const Mul *a, const unsigned int); template &lt;class Archive&gt; void load_construct_data(Archive &amp;ar, Int *e, const unsigned int); template &lt;class Archive&gt; void save_construct_data(Archive &amp;ar, const Int *a, const unsigned int); } // namespace serialization } // namespace boost //memory manager std::vector&lt;std::unique_ptr&lt;Expr&gt;&gt; pool; // AST struct Expr { virtual ~Expr() {} virtual std::vector&lt;Expr const *&gt; children() const = 0; virtual std::string identity() const = 0; void print(int p) const { std::cout &lt;&lt; std::setw(p) &lt;&lt; ' '; std::cout &lt;&lt; identity() &lt;&lt; "\n"; for (auto a_kid : children()) { a_kid-&gt;print(p + 2); } } void self_register() const { if (std::find_if(pool.begin(), pool.end(), [this](auto const &amp;stored_ptr) { return this == stored_ptr.get(); }) == pool.end()) { pool.push_back(std::unique_ptr&lt;Expr&gt;(const_cast&lt;Expr *&gt;(this))); } for (auto ptr : children()) { ptr-&gt;self_register(); } } private: friend class boost::serialization::access; template &lt;class Archive&gt; void serialize(Archive &amp;ar, const unsigned int version) {} }; struct Int : Expr { int const n; std::vector&lt;Expr const *&gt; children() const override { return {}; } std::string identity() const override { return "Int[" + std::to_string(n) + "]@"; } Int(int nn) : n(nn) {} template &lt;class Archive&gt; void serialize(Archive &amp;ar, const unsigned int version) { ar &amp;boost::serialization::base_object&lt;Expr&gt;(*this); } template &lt;class Archive&gt; friend void b_ser::save_construct_data(Archive &amp;ar, const Int *i, const unsigned int) { ar &lt;&lt; i-&gt;n; } template &lt;class Archive&gt; friend void b_ser::load_construct_data(Archive 
&amp;ar, Int *i, const unsigned int) { int n; ar &gt;&gt; n; ::new (i) Int(n); } }; template &lt;class T&gt; struct Op2 : Expr { std::vector&lt;Expr const *&gt; children() const override { return {l, r}; } std::string identity() const override { return T::message; } Op2(Expr const *ll, Expr const *rr) : l(ll), r(rr) {} protected: Expr const *l; Expr const *r; }; struct Mul : Op2&lt;Mul&gt; { using Op2::Op2; static auto const constexpr message = "Mul"; private: friend class boost::serialization::access; template &lt;class Archive&gt; void serialize(Archive &amp;ar, const unsigned int version) { ar &amp;boost::serialization::base_object&lt;Expr&gt;(*this); } template &lt;class Archive&gt; friend void b_ser::save_construct_data(Archive &amp;ar, const Mul *a, const unsigned int) { ar &lt;&lt; a-&gt;l; ar &lt;&lt; a-&gt;r; } template &lt;class Archive&gt; friend void b_ser::load_construct_data(Archive &amp;ar, Mul *e, const unsigned int) { Expr *l, *r; ar &gt;&gt; l; ar &gt;&gt; r; ::new (e) Mul(l, r); e-&gt;self_register(); } }; template &lt;class T, class... Args&gt; T *store(Args... args) { auto to_store = std::make_unique&lt;T&gt;(std::forward&lt;Args&gt;(args)...); auto raw_ptr = to_store.get(); pool.push_back(std::move(to_store)); return raw_ptr; } BOOST_CLASS_EXPORT(Expr) BOOST_CLASS_EXPORT(Int) BOOST_CLASS_EXPORT(Mul) int main(int argc, char *argv[]) { { auto deux = store&lt;Int&gt;(2); auto trois = store&lt;Int&gt;(3); auto m_23 = store&lt;Mul&gt;(trois, deux); auto quatre = store&lt;Int&gt;(4); auto root = store&lt;Mul&gt;(m_23, quatre); Expr *e_root = root; root-&gt;print(2); std::ofstream of("arxiv"); boost::archive::text_oarchive oa(of); oa &lt;&lt; e_root; } std::cout &lt;&lt; "===================" &lt;&lt; "\n"; { std::ifstream isf("arxiv"); boost::archive::text_iarchive is(isf); Expr *expr; is &gt;&gt; expr; expr-&gt;print(2); } return 0; } </code></pre>
2018-02-10 13:22:20.097000+00:00
2018-02-11 03:57:55.713000+00:00
null
c++|c++11|boost|boost-serialization
['http://rextester.com/JDME53734', 'http://coliru.stacked-crooked.com/a/15a3428f1a5f7eff', 'http://coliru.stacked-crooked.com/a/092f68e30b8cfaf8', 'http://coliru.stacked-crooked.com/a/18cc9cff08b370cb']
4
53,980,906
<p>You want to <strong>recognize text of a document containing multiple lines</strong>. There are <strong>two ways</strong> to achieve this:</p> <ol> <li><p><strong>Segment</strong> the document into <strong>lines</strong> as a <strong>pre-processing</strong> step, then feed each segmented line separately into your neural network. If you want to go this way, e.g. read the paper [1] from Bunke and Marti. They essentially count the black-white transitions for each scanline and create a histogram out of it. They use the minimums of the histogram to split the document into individual lines. There are some other methods too to segment a document into lines.</p></li> <li><p><strong>Train</strong> the <strong>neural network</strong> to <strong>implicitly segment</strong> the document into <strong>lines</strong>. You need to add attention to the neural network, such that it can focus on individual lines. Bluche has done some great work towards text recognition on document-level. See the paper [2] and the website [3].</p></li> </ol> <p>[1] Bunke, Marti: The IAM-database: an English sentence database for offline handwriting recognition. Download via Springer</p> <p>[2] Bluche: Joint Line Segmentation and Transcription for End-to-End Handwritten Paragraph Recognition. Download via <a href="https://arxiv.org/abs/1604.08352" rel="noreferrer">https://arxiv.org/abs/1604.08352</a></p> <p>[3] Bluche: Scan, Attend and Read. See <a href="http://www.tbluche.com/scan_attend_read.html" rel="noreferrer">http://www.tbluche.com/scan_attend_read.html</a> and look for "Handwriting Recognition with MDLSTM and CTC" and "The Collapse Layer and its Proposed Replacements"</p>
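<p>If you go with the first option, a minimal sketch of line pre-segmentation via a horizontal projection profile might look like the following (this is a simplification of the transition-counting idea in [1], not the authors' exact algorithm; the function name, the Otsu threshold choice and the <code>min_height</code> value are my own assumptions):</p> <pre><code>
# Assumes a grayscale page image with dark text on a light background.
import cv2
import numpy as np

def split_into_lines(page_gray, min_height=5):
    # Binarize so that text pixels become 1 and background 0.
    _, binary = cv2.threshold(page_gray, 0, 1, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    profile = binary.sum(axis=1)          # amount of "ink" per scanline
    is_text_row = profile &gt; 0
    lines, start = [], None
    for y, has_ink in enumerate(is_text_row):
        if has_ink and start is None:
            start = y                      # a text line begins
        elif not has_ink and start is not None:
            if y - start &gt;= min_height:
                lines.append(page_gray[start:y, :])   # crop one text line
            start = None
    if start is not None:
        lines.append(page_gray[start:, :])
    return lines
</code></pre> <p>Each returned crop can then be fed separately into the existing single-line CNN+RNN+CTC model.</p>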
2018-12-30 19:49:10.187000+00:00
2018-12-30 19:49:10.187000+00:00
null
null
53,928,871
<p>I'm building an OCR. For that I'm using a <code>CNN</code>, an <code>RNN</code> and the <code>CTC</code> loss function. My input layer gets an image and the output layer predicts what's written on that image. Labels are converted into integers.</p> <pre><code>['A', 'B', 'C'] -&gt; A = 0, B = 1, C = 2 </code></pre> <p>If the image is ABC, the training label will be 0,1,2 (a single row vector)</p> <p>I'm able to accomplish this on a single line. For example, '<code>ABCDE</code>' is written on an image and the model works great. But if the image is</p> <pre><code>'ABC' 'CAB' </code></pre> <p>then what should the training label be? How can I tell the model about the next line? I want to train a model on multiple lines.</p>
2018-12-26 07:31:32.460000+00:00
2019-03-06 02:58:00.140000+00:00
2018-12-26 08:50:19.493000+00:00
python|tensorflow|keras|ocr
['https://arxiv.org/abs/1604.08352', 'http://www.tbluche.com/scan_attend_read.html']
2
38,857,494
<p>Usually, if one wants to obtain embeddings from a query or a sentence using an RNN, the logits are used. The logits are simply the output values of the network after the forward pass of the full sentence/query. </p> <p>The logit values produce a vector that has the dimensions of the output layer (i.e. the number of target classes): usually, that is the vocabulary, since they are extracted from a language model.</p> <p>For hints have a look at these:</p> <ul> <li><a href="http://arxiv.org/abs/1603.07012" rel="nofollow noreferrer">http://arxiv.org/abs/1603.07012</a></li> <li><a href="https://stackoverflow.com/questions/38738821/how-does-word2vec-give-one-hot-word-vector-from-the-embedding-vector">How does word2vec give one hot word vector from the embedding vector?</a></li> </ul> <p>Note that in principle one could also use bidirectional networks or networks trained on other tasks, obtaining smaller embeddings, although this last option is less common and, to my knowledge, has not been explored much.</p>
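<p>As a concrete (purely illustrative) sketch of the idea in Keras: train a small LSTM language model on the queries, then reuse either the hidden state or the output layer as the query representation. The layer sizes, names and training setup below are placeholders, not something taken from the references above:</p> <pre><code>
import numpy as np
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

vocab_size, seq_len = 5000, 20                      # hypothetical values
inp = Input(shape=(seq_len,))
x = Embedding(vocab_size, 64)(inp)
h = LSTM(128, name="encoder")(x)                    # final hidden state of the query
probs = Dense(vocab_size, activation="softmax")(h)  # next-word prediction head

lm = Model(inp, probs)
lm.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... fit `lm` on (query prefix, next word) pairs as in a language model ...

# The 128-d hidden state (or the vocabulary-sized output vector) can then serve
# as the embedded vector discussed above:
encoder = Model(inp, lm.get_layer("encoder").output)
query_vec = encoder.predict(np.random.randint(0, vocab_size, size=(1, seq_len)))
</code></pre>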
2016-08-09 17:58:06.457000+00:00
2016-08-09 18:56:11.827000+00:00
2017-05-23 12:00:56.870000+00:00
null
38,562,302
<p>I am working on a research project on text data (it's about supervised classification of search engine queries). I have already implemented different methods and I have also used different models for the text (such as binary vectors of the dimension of my vocabulary - 1 if the i-th word appears in the text, 0 otherwise - or word embeddings with the word2vec model).</p> <p>My advisor told me that maybe we could find another representation of the queries using a Recurrent Neural Network. This representation should take into account the sequentiality of the words in the text thanks to the recurrence relation. I have read some documentation about RNNs but I haven't found anything useful for this goal. I have read a lot about language modelling (which predicts word probabilities), but I don't understand how I could adapt this model in order to obtain something like an embedded vector. </p> <p>Thank you very much!</p>
2016-07-25 07:49:34.973000+00:00
2016-08-09 18:56:11.827000+00:00
null
embedding|recurrent-neural-network
['http://arxiv.org/abs/1603.07012', 'https://stackoverflow.com/questions/38738821/how-does-word2vec-give-one-hot-word-vector-from-the-embedding-vector']
2
53,844,989
<p>I highly recommend you to use <a href="https://pypi.org/project/mss/" rel="noreferrer">MSS</a> instead of cv2 to capture the screen. cv2 is useful to handle image data, but no good at capturing. On the other hand, mss runs much faster than any other screen capture APIs. I used mss to apply object detection(<a href="https://arxiv.org/pdf/1612.08242.pdf" rel="noreferrer">YOLOv2</a>, <a href="https://github.com/thtrieu/darkflow" rel="noreferrer">darkflow</a>), and it run at over 40 frames per second. If it is used without any object detection, it should run at more fps. Here is the script:</p> <pre><code>import numpy as np import cv2 import glob from moviepy.editor import VideoFileClip from mss import mss from PIL import Image import time color = (0, 255, 0) # bounding box color. # This defines the area on the screen. mon = {'top' : 10, 'left' : 10, 'width' : 1000, 'height' : 800} sct = mss() previous_time = 0 while True : sct.get_pixels(mon) frame = Image.frombytes( 'RGB', (sct.width, sct.height), sct.image ) frame = np.array(frame) # image = image[ ::2, ::2, : ] # can be used to downgrade the input frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) cv2.imshow ('frame', frame) if cv2.waitKey ( 1 ) &amp; 0xff == ord( 'q' ) : cv2.destroyAllWindows() txt1 = 'fps: %.1f' % ( 1./( time.time() - previous_time )) previous_time = time.time() print txt1 </code></pre>
2018-12-19 05:17:23.787000+00:00
2018-12-19 05:17:23.787000+00:00
null
null
50,310,613
<p>So I have this code, which records my screen and saves it as output.avi, but it only captures 10-15 frames per second. How do I make it capture at least around 50-60 frames per second? If I am not wrong, cv2 is CPU-based or something. How do I use the GPU to do this task?</p> <pre><code>import cv2 from PIL import ImageGrab import numpy as np fourcc = cv2.VideoWriter_fourcc('X','V','I','D') video = cv2.VideoWriter("output.avi",fourcc,8,(1920,1080)) while(True): image = ImageGrab.grab() image = np.array(image) frame = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) video.write(frame) key = cv2.waitKey(1) cv2.imshow("Hello",frame) if(key==27): break video.release() cv2.destroyAllWindows() </code></pre>
2018-05-12 20:58:34.717000+00:00
2021-09-11 01:13:51.190000+00:00
null
python|opencv
['https://pypi.org/project/mss/', 'https://arxiv.org/pdf/1612.08242.pdf', 'https://github.com/thtrieu/darkflow']
3
47,800,490
<p>More negative examples means more model-adjustment is happening, with each 'target' word training-example. So with more work being done, it's <em>plausible</em> that the model will improve somewhat for some purposes. </p> <p>But those extra calculations require more training time - so the value of increasing that parameter could be weighed against other choices, that also may offer improvement-at-the-cost-of-training-time. For example, increasing <code>window</code> or the number of training-iterations over the corpus also plausibly improve the model at cost of time.</p> <p>Interestingly, more negative examples tend to bias the coordinate-positions of most words, meaning the "cloud" of vectors isn't centered on the origin point. And, at least one recent paper has suggested a final step of removing this bias – transforming the final coordinates to restore a global average at the origin – can improve the word-vectors' utility on some tasks. </p> <p>Additionally, the original Word2Vec paper notes that with large corpuses, fewer negative examples may be sufficient or optimal. Section 2.2 of <a href="https://arxiv.org/abs/1310.4546" rel="nofollow noreferrer">'Distributed Representations of Words and Phrases and their Compositionality'</a> notes, "Our experiments indicate that values of k in the range 5–20 are useful for small training datasets, while for large datasets the k can be as small as 2–5." (I've even seen acceptable results, in a large corpus, with a single negative example.) </p> <p>So, it's worthwhile to experiment with different <code>negative</code> values, and some reasons to believe more examples can help, but it's not automatically a case of "more are better", and especially with larger corpuses, fewer negative examples may be sufficient or even optimal. </p>
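<p>For example, if you are using gensim, it is cheap to sweep a few <code>negative</code> values and compare them on your own downstream evaluation. A hedged sketch (the toy corpus and hyper-parameter values are placeholders; the parameter names follow gensim 4.x, where older versions use <code>size</code> and <code>iter</code> instead of <code>vector_size</code> and <code>epochs</code>):</p> <pre><code>
from gensim.models import Word2Vec

sentences = [["the", "quick", "brown", "fox"],
             ["jumps", "over", "the", "lazy", "dog"]]   # replace with your corpus

for k in (2, 5, 20):
    model = Word2Vec(sentences, vector_size=100, window=5, min_count=1,
                     sg=1, negative=k, epochs=5)
    # ... evaluate model.wv on your own similarity/analogy task here ...
    print(k, model.wv.most_similar("fox", topn=3))
</code></pre>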
2017-12-13 19:09:08.697000+00:00
2017-12-13 19:09:08.697000+00:00
null
null
47,785,599
<p>I am reading the paper</p> <p>Distributed Representations of Words and Phrases and their Compositionality.</p> <p>It is very interesting, but I am really curious about the relationship between the parameter 'negative' and the final performance. I personally think the final performance may improve as 'negative' increases, up to some value, because with more negative samples used for the comparison, we should theoretically get better results. Of course, the performance will stop improving beyond some point. Am I right?</p>
2017-12-13 04:41:03.187000+00:00
2017-12-13 19:09:08.697000+00:00
null
nlp|word2vec
['https://arxiv.org/abs/1310.4546']
1
52,290,910
<p>A drawback of SGD-based optimizers is that they rely upon a scalar, uniform learning rate for the gradients in all directions (i.e., for all the parameters whose gradients are to be updated). In contrast, adaptive learning strategies such as Adam diagonally scale the gradient based upon estimates of the function’s curvature. Instead of maintaining a learning rate that is shared amongst all parameters, Adam uses a vector of learning rates, one for each parameter, and adapts those as training progresses. It is this non-uniform scaling of the gradients that results in the lag in Adam's generalization capabilities, and probably, in your case, the massive decrease in accuracies.</p> <p>As mentioned in <a href="https://arxiv.org/pdf/1712.07628.pdf" rel="nofollow noreferrer">Improving Generalization Performance by Switching from Adam to SGD</a>:</p> <blockquote> <p>Despite superior training outcomes, adaptive optimization methods such as Adam, Adagrad or RMSprop have been found to generalize poorly compared to Stochastic gradient descent (SGD). These methods tend to perform well in the initial portion of training but are outperformed by SGD at later stages of training.</p> </blockquote> <p>In order to combine the best of both optimizers, they introduce a technique for switching from Adam to SGD that takes care of: (a) the switchover point, i.e. how long to train the model with Adam before switching to SGD. As a rule of thumb, the paper suggests switching after at least 10 epochs. (b) the learning rate to be used for SGD after the switch: determined by the momentum parameter <code>beta_1</code> of Adam. </p> <p>A good explanation can be found <a href="https://github.com/kweonwooj/papers/issues/76" rel="nofollow noreferrer">here</a>.</p>
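<p>A rough two-phase schedule is easy to try in Keras, reusing the variables from the question. Note this is only a simplified stand-in for the SWATS rule in the paper (which derives the switch point and the SGD learning rate automatically); the epoch counts and learning rates below are placeholders, and older Keras versions spell the argument <code>lr</code> instead of <code>learning_rate</code>:</p> <pre><code>
from keras.optimizers import Adam, SGD

# Phase 1: warm up with Adam.
model.compile(loss=loss_function, optimizer=Adam(learning_rate=1e-3), metrics=['accuracy'])
model.fit(x_train, y_train_cat, batch_size=batch_size, epochs=10,
          validation_data=(x_test, y_test_cat))

# Phase 2: continue with SGD + momentum; re-compiling keeps the learned weights
# but resets the optimizer state.
model.compile(loss=loss_function, optimizer=SGD(learning_rate=1e-2, momentum=0.9), metrics=['accuracy'])
model.fit(x_train, y_train_cat, batch_size=batch_size, epochs=epoch_count - 10,
          validation_data=(x_test, y_test_cat))
</code></pre>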
2018-09-12 08:37:41.213000+00:00
2018-09-12 08:37:41.213000+00:00
null
null
45,655,156
<p>I'm training a covnet on ~10,000 images and have noticed that switching the optimizer from <code>opt = SGD()</code> to <code>opt = 'adam'</code> leads to massive reduction in accuracies, keeping all else params equal. With SGD(), I get to about 80% accuracy (with gradual increases after each epoch). With Adam, I'm stuck at 22.25% validation accuracies at every epoch. </p> <p><strong>I want to understand what the likely cause for this is.</strong> </p> <p>Parameters</p> <pre><code>dropout_prob = 0.2 activation_function = 'relu' loss_function = 'categorical_crossentropy' batch_size = 32 epoch_count = 20 num_classes = 3 </code></pre> <p>Model</p> <pre><code> model = Sequential() model.add(Conv2D(filters=16, kernel_size=(3, 3), input_shape=inp_shape)) model.add(Conv2D(filters=32, kernel_size=(3, 3))) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(rate=dropout_prob)) model.add(Flatten()) model.add(Dense(128, activation=activation_function)) model.add(Dropout(rate=dropout_prob)) model.add(Dense(64, activation=activation_function)) model.add(Dropout(rate=dropout_prob)) model.add(Dense(32, activation=activation_function)) model.add(Dense(num_classes, activation='softmax')) model.summary() model.compile(loss=loss_function, optimizer=opt, metrics=['accuracy']) history = model.fit(x_train, y_train_cat, batch_size=batch_size, epochs=epoch_count, verbose=verbose_level, validation_data=(x_test, y_test_cat)) </code></pre>
2017-08-12 21:39:11.120000+00:00
2018-09-12 08:37:41.213000+00:00
2017-09-13 15:56:29.053000+00:00
neural-network|keras|conv-neural-network
['https://arxiv.org/pdf/1712.07628.pdf', 'https://github.com/kweonwooj/papers/issues/76']
2
12,582,522
<p>There are numerous research groups using relativistic codes, for all sorts of physics problems; from <a href="http://arxiv.org/abs/astro-ph/0202447" rel="nofollow noreferrer">Relativistic Electrodynamics</a> and <a href="http://arxiv.org/abs/0907.3647" rel="nofollow noreferrer">Relativistic Fluid Dynamics/Magnetohydrodynamics</a> to gravitation-based simulations, etc. Astrophysical applications are the main place you would meet relativistic codes. </p> <p>A 4D game engine is what you already have in games like FIFA and COD. This is just a 3 + 1 implementation, which incidentally is what many relativistic codes are (they use the 3 + 1 formulation of space-time). This splitting of space-time is much easier to handle computationally for many different reasons. Of course, as you go from 1D to 2D etc., your complexity increases in line with the simulated physics.</p> <p>To me it makes no sense to have a physics engine in n dimensions. We do not experience physical processes in n dimensions, but in four. Asking about hypercubes etc. is not physics but about geometrical/mathematical constructs. These are separate from what you would traditionally associate with a physics engine.</p>
2012-09-25 11:59:55.260000+00:00
2019-06-07 08:02:42.090000+00:00
2019-06-07 08:02:42.090000+00:00
null
12,497,341
<p>Has anybody tried to implement a 4D or n-dimensional physics engine (realtime or not)?</p> <p>What difficulties arise in such an implementation, compared to 3D and 2D physics engines? Of course, one of them is the presentation problem. It would be interesting to look at, and to find out more about, 4D hyperspheres, hypercubes, springs, joints, liquids and other objects.</p> <p>I am just curious, and do not have a real application using it.</p> <p>A generalization of my idea is physics in Lobachevskian or Riemannian geometries, distortion spaces (you can go through the eye of a needle), looped spaces (returning to the same place), physics paradoxes and other amazing things.</p>
2012-09-19 14:57:09.490000+00:00
2020-10-19 21:50:26.167000+00:00
2013-11-30 16:26:26.263000+00:00
math|game-physics|physics-engine
['http://arxiv.org/abs/astro-ph/0202447', 'http://arxiv.org/abs/0907.3647']
2
69,210,567
<p>I know this is an old post but for anyone coming back to this:</p> <p>Adding to what Hai-Anh Trinh said, Transformers aren't 'bi-directional'; it would be better to call them &quot;omni-directional&quot;. Because of their self-attention mechanism, they are able to consider every single word simultaneously.</p> <p>BERT on the other hand is &quot;deeply bidirectional&quot;. This is because of the masked language model (MLM) pre-training objective that is used in BERT. (There are a lot of resources online; I can link some if need be.)</p> <p>It's easy to get confused so don't worry about it.</p> <p>(<a href="https://arxiv.org/pdf/1810.04805.pdf;" rel="nofollow noreferrer">https://arxiv.org/pdf/1810.04805.pdf;</a> link to the original BERT paper) (<a href="https://arxiv.org/pdf/1706.03762.pdf;" rel="nofollow noreferrer">https://arxiv.org/pdf/1706.03762.pdf;</a> link to the original Transformer paper)</p>
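<p>A tiny way to see the masked-language-model objective in action is the standard fill-mask example from the Hugging Face <code>transformers</code> docs (the model name and sentence are just examples):</p> <pre><code>
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
print(unmasker("The man worked as a [MASK]."))
# Predicting [MASK] forces the model to use the words on BOTH sides of the gap,
# which is what "deeply bidirectional" refers to.
</code></pre>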
2021-09-16 14:44:45.443000+00:00
2021-09-16 14:44:45.443000+00:00
null
null
55,158,554
<p>I am coming from a Google BERT context (Bidirectional Encoder Representations from Transformers). I have gone through the architecture and the code. People say this is <strong>bidirectional <em>by nature</em></strong>. To make the attention unidirectional, some mask has to be applied. </p> <p>Basically a transformer takes keys, values and queries as input; uses an encoder-decoder architecture; and applies attention to these keys, queries and values. What I understood is that we need to pass tokens explicitly rather than the transformer understanding this by nature. </p> <p>Can someone please explain <strong>what makes the transformer bidirectional by nature</strong>?</p>
2019-03-14 09:05:58.643000+00:00
2021-09-16 14:44:45.443000+00:00
null
machine-learning
['https://arxiv.org/pdf/1810.04805.pdf;', 'https://arxiv.org/pdf/1706.03762.pdf;']
2
31,310,229
<p>One of the easiest and most effective modern alternatives to edit distance is called the Normalized Compression Distance, or NCD. The basic idea is easy to explain. Choose a popular compressor that is implemented in your language such as <em>zlib</em>. Then, given string <em>A</em> and string <em>B</em>, let <em>C(A)</em> be the compressed size of <em>A</em> and <em>C(B)</em> be the compressed size of <em>B</em>. Let <em>AB</em> mean "<em>A</em> concatenated with <em>B</em>", so that <em>C(AB)</em> means "the compressed size of <em>A</em> concatenated with <em>B</em>". Next, compute the fraction <p>(<em>C(AB)</em> - min(<em>C(A)</em>,<em>C(B)</em>)) / max(<em>C(A)</em>, <em>C(B)</em>) <p> This value is called NCD(<em>A</em>,<em>B</em>) and measures similarity much as edit distance does, but it supports more forms of similarity depending on which data compressor you choose. Certainly, zlib supports the "chunk" style similarity that you are describing. If two strings are similar, the compressed size of the concatenation will be near the size of each alone, so the numerator will be near 0 and the result will be near 0. If two strings are very dissimilar, the compressed size of the concatenation will be roughly the sum of the individual compressed sizes, so the result will be near 1. This formula is much easier to implement than edit distance or almost any other explicit string similarity measure if you already have access to a data compression program like zlib. That is because most of the "hard" work, such as heuristics and optimization, has already been done in the data compression part, and this formula simply extracts the amount of shared patterns it found using generic information theory that is agnostic to language. Moreover, this technique will be much faster than most explicit similarity measures (such as edit distance) for the few-hundred-byte size range you describe. For more information on this and a sample implementation just search Normalized Compression Distance (NCD) or have a look at the following paper and github project: <p> <a href="http://arxiv.org/abs/cs/0312044" rel="nofollow">http://arxiv.org/abs/cs/0312044</a> "Clustering by Compression" <p> <a href="https://github.com/rudi-cilibrasi/libcomplearn" rel="nofollow">https://github.com/rudi-cilibrasi/libcomplearn</a> C language implementation</p> <p>There are many other implementations and papers on this subject from the last decade that you may use as well, in other languages and with modifications.</p>
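<p>For example, a minimal Python sketch of the formula on top of zlib (the libraries linked above are more complete and robust):</p> <pre><code>
import zlib

def ncd(a, b):
    """a, b: byte strings. Returns the Normalized Compression Distance."""
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

print(ncd(b"Turing, Alan.", b"Alan Turing"))
print(ncd(b"Turing, Alan.", b"Turing Machine"))
# NB: on strings this short the compressor's fixed overhead blurs the result;
# the measure behaves much better in the few-hundred-byte range discussed above.
</code></pre>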
2015-07-09 06:58:08.493000+00:00
2015-07-09 06:58:08.493000+00:00
null
null
878,114
<p>I put "chunk transposition" in quotes because I don't know whether or what the technical term should be. Just knowing if there is a technical term for the process would be very helpful.</p> <p>The <a href="http://en.wikipedia.org/wiki/Edit_distance" rel="noreferrer">Wikipedia article on edit distance</a> gives some good background on the concept.</p> <p>By taking "chunk transposition" into account, I mean that</p> <pre><code>Turing, Alan. </code></pre> <p>should match </p> <pre><code>Alan Turing </code></pre> <p>more closely than it matches</p> <pre><code>Turing Machine </code></pre> <p>I.e. the distance calculation should detect when substrings of the text have simply been moved within the text. This is not the case with the common Levenshtein distance formula.</p> <p>The strings will be a few hundred characters long at most -- they are author names or lists of author names which could be in a variety of formats. I'm not doing DNA sequencing (though I suspect people that do will know a bit about this subject).</p>
2009-05-18 14:44:34.297000+00:00
2015-07-09 06:58:08.493000+00:00
2009-05-18 17:19:44.750000+00:00
algorithm|language-agnostic|levenshtein-distance|edit-distance
['http://arxiv.org/abs/cs/0312044', 'https://github.com/rudi-cilibrasi/libcomplearn']
2
67,621,010
<p>Yes, it would be better to use anchors/default boxes with aspect ratios similar to what you see in your data.</p> <p>For example, if you use the TF Object Detection API, each model comes with a config file holding the model configuration.<br /> e.g.: <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/configs/tf2/ssd_mobilenet_v2_320x320_coco17_tpu-8.config" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/configs/tf2/ssd_mobilenet_v2_320x320_coco17_tpu-8.config</a></p> <pre><code> ssd_anchor_generator { num_layers: 6 min_scale: 0.2 max_scale: 0.95 aspect_ratios: 1.0 aspect_ratios: 2.0 aspect_ratios: 0.5 aspect_ratios: 3.0 aspect_ratios: 0.3333 } } </code></pre> <p>Usually the term aspect ratio refers to the result of <strong>width/height</strong>.<br /> So, if you only wanted landscape-like objects, you would keep only the aspect ratios bigger than 1 (2.0, 3.0).</p> <p>Also, just to emphasize this point, choosing aspect ratios similar to what you expect is also seen in the literature. For example, the YOLOv3 article (<a href="https://arxiv.org/pdf/1804.02767.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1804.02767.pdf</a>) <a href="https://i.stack.imgur.com/JE58Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JE58Y.png" alt="yolov3-choosing-default boxes" /></a></p> <p>In YOLOv3, Redmon chose the anchors after analyzing the most probable object shapes in COCO.</p>
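<p>One practical way to pick the ratios is to measure them directly from your own annotations. A hedged sketch (here <code>my_annotations</code> is a hypothetical list of ground-truth box sizes in pixels):</p> <pre><code>
import numpy as np

boxes = np.array([[w, h] for (w, h) in my_annotations])  # hypothetical source of (width, height) pairs
ratios = boxes[:, 0] / boxes[:, 1]                       # width / height

print("aspect ratio percentiles:", np.percentile(ratios, [5, 25, 50, 75, 95]))
# For a pencil-like object in a fixed orientation you would then keep only a few
# ratios around the observed values (e.g. several values above 1 for landscape
# objects) and drop the square/opposite-orientation defaults from the config above.
</code></pre>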
2021-05-20 13:10:34.593000+00:00
2021-05-20 13:10:34.593000+00:00
null
null
67,608,361
<p>I want to train a deep learning model (say SSD or yolo) for object detection. The object I want to detect has very high aspect ratio, say a pencil. I want the output bounding box as close as possible to the object with similar aspect ratio. How should I optimize the model for this? Should I optimize all aspect ratio of pre-defined boxes to make them closer to the real object? For my case, the object is always in one orientation. Thanks</p>
2021-05-19 17:47:39.540000+00:00
2021-05-20 13:10:34.593000+00:00
null
deep-learning|object-detection|yolo
['https://github.com/tensorflow/models/blob/master/research/object_detection/configs/tf2/ssd_mobilenet_v2_320x320_coco17_tpu-8.config', 'https://arxiv.org/pdf/1804.02767.pdf', 'https://i.stack.imgur.com/JE58Y.png']
3
39,259,772
<p>Idea #1: Gradient clipping is often applied in RNNs. Here is an example of implementation: <a href="https://stackoverflow.com/questions/36498127/how-to-effectively-apply-gradient-clipping-in-tensor-flow">How to effectively apply gradient clipping in tensor flow?</a></p> <p>Idea #2: Using <a href="https://arxiv.org/pdf/1603.09025.pdf" rel="nofollow noreferrer">Recurrent Batch Normalization (arXiv)</a> (<a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">Batch Normalization</a>)</p> <p>Here is a Tensorflow implementation of a batch normalized LSTM cell: <a href="https://github.com/OlavHN/bnlstm/blob/master/lstm.py" rel="nofollow noreferrer">https://github.com/OlavHN/bnlstm/blob/master/lstm.py</a></p> <p>This implementation is explained in the article here : <a href="http://olavnymoen.com/2016/07/07/rnn-batch-normalization" rel="nofollow noreferrer">Batch normalized LSTM for Tensorflow</a></p>
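<p>For idea #1, a minimal graph-mode TensorFlow sketch of global-norm clipping, in the spirit of the linked answer (the clipping threshold is a placeholder and <code>loss</code> is assumed to come from your own LSTM graph):</p> <pre><code>
import tensorflow as tf

optimizer = tf.train.AdamOptimizer(1e-3)
grads_and_vars = optimizer.compute_gradients(loss)        # `loss` from your own graph
grads, variables = zip(*grads_and_vars)
clipped, _ = tf.clip_by_global_norm(grads, clip_norm=5.0) # cap the global gradient norm
train_op = optimizer.apply_gradients(zip(clipped, variables))
</code></pre>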
2016-08-31 22:09:51.013000+00:00
2016-08-31 22:09:51.013000+00:00
2017-05-23 10:33:44.710000+00:00
null
35,323,023
<p>Perhaps a question better posed to Computer Science or Cross Validated?</p> <hr> <p>I'm beginning some work with LSTMs on sequences of arbitrary length, and one problem I'm experiencing, and that I haven't seen addressed, is that my network seems to have developed a couple of parameters that grow linearly (perhaps as a measure of time?).</p> <p>The obvious issue with this is that the training data is bounded at a sequence of length <code>x</code>, and so the network grows this parameter reasonably up until timestep <code>x</code>. But after that, the network will eventually produce NaNs because values are getting too extreme.</p> <p>Has anyone read anything about the normalization or stabilization of states over time?</p> <p>Any suggestions would be much appreciated.</p>
2016-02-10 18:22:27.867000+00:00
2016-08-31 22:09:51.013000+00:00
2016-02-10 20:42:05.343000+00:00
python-2.7|neural-network|tensorflow|lstm|recurrent-neural-network
['https://stackoverflow.com/questions/36498127/how-to-effectively-apply-gradient-clipping-in-tensor-flow', 'https://arxiv.org/pdf/1603.09025.pdf', 'https://arxiv.org/abs/1502.03167', 'https://github.com/OlavHN/bnlstm/blob/master/lstm.py', 'http://olavnymoen.com/2016/07/07/rnn-batch-normalization']
5
55,115,614
<p>You cannot parallelize an RNN in time (the 1000 steps here) because it is inherently sequential.</p> <p>You can use a light RNN, something like <a href="https://github.com/salesforce/pytorch-qrnn" rel="nofollow noreferrer">QRNN</a> or <a href="https://github.com/taolei87/sru" rel="nofollow noreferrer">SRU</a>, as a faster alternative (which is still sequential).</p> <p>Other common sequence-processing modules are <a href="https://github.com/locuslab/TCN/" rel="nofollow noreferrer">TCN</a> and <a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">Transformers</a>, which are both parallelizable in time.</p> <p>Also, note that all of them can be used with attention and work perfectly fine with text.</p>
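<p>To make the TCN option concrete, here is a minimal PyTorch sketch of causal, dilated 1-D convolutions used as a time-parallel encoder; the layer sizes are placeholders and this is not the full TCN architecture from the linked repo (no residual blocks, normalization or dropout):</p> <pre><code>
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                        # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))  # left-pad only, so the convolution is causal
        return torch.relu(self.conv(x))

encoder = nn.Sequential(
    CausalConv1d(300, 256, kernel_size=3, dilation=1),
    CausalConv1d(256, 256, kernel_size=3, dilation=2),
    CausalConv1d(256, 256, kernel_size=3, dilation=4),
    CausalConv1d(256, 256, kernel_size=3, dilation=8),
)
out = encoder(torch.randn(8, 300, 1000))  # all 1000 positions are processed in parallel
</code></pre>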
2019-03-12 06:48:46.670000+00:00
2019-03-12 06:48:46.670000+00:00
null
null
44,351,134
<p>I am following along <a href="http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html" rel="nofollow noreferrer">this pytorch tutorial</a> and trying to apply this principle to summarization, where the encoding sequence would be around 1000 words and decoder target 200 words.</p> <p>How do I apply <code>seq2seq</code> to this? I know it would be very expensive and almost infeasible to run through the whole sequence of 1000 words at once. So dividing the seq into say 20 seq and running in parallel could be an answer. But I'm not sure how to implement it; I also want to incorporate attention into it.</p>
2017-06-04 05:45:49.570000+00:00
2021-03-12 12:09:13.280000+00:00
2021-03-12 12:09:13.280000+00:00
python|lstm|summarization|pytorch
['https://github.com/salesforce/pytorch-qrnn', 'https://github.com/taolei87/sru', 'https://github.com/locuslab/TCN/', 'https://arxiv.org/abs/1706.03762']
4
34,644,608
<p>I haven't tested this code, but the only thing you need to change is to tell <strong><em>updates</em></strong> to use adam(..) instead of the updates already provided here, so something like this should work (complete code looks like this (we need to get rid of rmsprop stuff)):</p> <pre><code>import numpy as np import theano as theano import theano.tensor as T from theano.gradient import grad_clip import time import operator class GRUTheano(object): def __init__(self, word_dim, hidden_dim=128, bptt_truncate=-1): # Assign instance variables self.word_dim = word_dim self.hidden_dim = hidden_dim self.bptt_truncate = bptt_truncate # Initialize the network parameters E = np.random.uniform(-np.sqrt(1./word_dim), np.sqrt(1./word_dim), (hidden_dim, word_dim)) U = np.random.uniform(-np.sqrt(1./hidden_dim), np.sqrt(1./hidden_dim), (6, hidden_dim, hidden_dim)) W = np.random.uniform(-np.sqrt(1./hidden_dim), np.sqrt(1./hidden_dim), (6, hidden_dim, hidden_dim)) V = np.random.uniform(-np.sqrt(1./hidden_dim), np.sqrt(1./hidden_dim), (word_dim, hidden_dim)) b = np.zeros((6, hidden_dim)) c = np.zeros(word_dim) # Theano: Created shared variables self.E = theano.shared(name='E', value=E.astype(theano.config.floatX)) self.U = theano.shared(name='U', value=U.astype(theano.config.floatX)) self.W = theano.shared(name='W', value=W.astype(theano.config.floatX)) self.V = theano.shared(name='V', value=V.astype(theano.config.floatX)) self.b = theano.shared(name='b', value=b.astype(theano.config.floatX)) self.c = theano.shared(name='c', value=c.astype(theano.config.floatX)) # We store the Theano graph here self.theano = {} self.__theano_build__() def __theano_build__(self): E, V, U, W, b, c = self.E, self.V, self.U, self.W, self.b, self.c x = T.ivector('x') y = T.ivector('y') def forward_prop_step(x_t, s_t1_prev, s_t2_prev): # This is how we calculated the hidden state in a simple RNN. No longer! 
# s_t = T.tanh(U[:,x_t] + W.dot(s_t1_prev)) # Word embedding layer x_e = E[:,x_t] # GRU Layer 1 z_t1 = T.nnet.hard_sigmoid(U[0].dot(x_e) + W[0].dot(s_t1_prev) + b[0]) r_t1 = T.nnet.hard_sigmoid(U[1].dot(x_e) + W[1].dot(s_t1_prev) + b[1]) c_t1 = T.tanh(U[2].dot(x_e) + W[2].dot(s_t1_prev * r_t1) + b[2]) s_t1 = (T.ones_like(z_t1) - z_t1) * c_t1 + z_t1 * s_t1_prev # GRU Layer 2 z_t2 = T.nnet.hard_sigmoid(U[3].dot(s_t1) + W[3].dot(s_t2_prev) + b[3]) r_t2 = T.nnet.hard_sigmoid(U[4].dot(s_t1) + W[4].dot(s_t2_prev) + b[4]) c_t2 = T.tanh(U[5].dot(s_t1) + W[5].dot(s_t2_prev * r_t2) + b[5]) s_t2 = (T.ones_like(z_t2) - z_t2) * c_t2 + z_t2 * s_t2_prev # Final output calculation # Theano's softmax returns a matrix with one row, we only need the row o_t = T.nnet.softmax(V.dot(s_t2) + c)[0] return [o_t, s_t1, s_t2] [o, s, s2], updates = theano.scan( forward_prop_step, sequences=x, truncate_gradient=self.bptt_truncate, outputs_info=[None, dict(initial=T.zeros(self.hidden_dim)), dict(initial=T.zeros(self.hidden_dim))]) prediction = T.argmax(o, axis=1) o_error = T.sum(T.nnet.categorical_crossentropy(o, y)) # Total cost (could add regularization here) cost = o_error # Gradients dE = T.grad(cost, E) dU = T.grad(cost, U) dW = T.grad(cost, W) db = T.grad(cost, b) dV = T.grad(cost, V) dc = T.grad(cost, c) # Assign functions self.predict = theano.function([x], o) self.predict_class = theano.function([x], prediction) self.ce_error = theano.function([x, y], cost) self.bptt = theano.function([x, y], [dE, dU, dW, db, dV, dc]) self.params = [self.E, self.U, self.W, self.V, self.b, self.c] updates=adam(cost, self.params) self.sgd_step = theano.function( inputs=[x, y], outputs=[], updates=updates ) def calculate_total_loss(self, X, Y): return np.sum([self.ce_error(x,y) for x,y in zip(X,Y)]) def calculate_loss(self, X, Y): # Divide calculate_loss by the number of words num_words = np.sum([len(y) for y in Y]) return self.calculate_total_loss(X,Y)/float(num_words) def adam(loss, all_params, learning_rate=0.001, b1=0.9, b2=0.999, e=1e-8, gamma=1-1e-8): """ ADAM update rules Default values are taken from [Kingma2014] References: [Kingma2014] Kingma, Diederik, and Jimmy Ba. "Adam: A Method for Stochastic Optimization." arXiv preprint arXiv:1412.6980 (2014). http://arxiv.org/pdf/1412.6980v4.pdf """ updates = [] all_grads = theano.grad(loss, all_params) alpha = learning_rate t = theano.shared(np.float32(1)) b1_t = b1*gamma**(t-1) #(Decay the first moment running average coefficient) for theta_previous, g in zip(all_params, all_grads): m_previous = theano.shared(np.zeros(theta_previous.get_value().shape, dtype=theano.config.floatX)) v_previous = theano.shared(np.zeros(theta_previous.get_value().shape, dtype=theano.config.floatX)) m = b1_t*m_previous + (1 - b1_t)*g # (Update biased first moment estimate) v = b2*v_previous + (1 - b2)*g**2 # (Update biased second raw moment estimate) m_hat = m / (1-b1**t) # (Compute bias-corrected first moment estimate) v_hat = v / (1-b2**t) # (Compute bias-corrected second raw moment estimate) theta = theta_previous - (alpha * m_hat) / (T.sqrt(v_hat) + e) #(Update parameters) updates.append((m_previous, m)) updates.append((v_previous, v)) updates.append((theta_previous, theta) ) updates.append((t, t + 1.)) return updates </code></pre>
2016-01-06 23:11:08.657000+00:00
2016-01-06 23:11:08.657000+00:00
null
null
33,846,101
<p>Technical information:</p> <p>OS: Mac OS X 10.9.5</p> <p>IDE: Eclipse Mars.1 Release (4.5.1), with PyDev and Anaconda interpreter (grammar version 3.4)</p> <p>GPU: NVIDIA GeForce GT 650M</p> <p>Libs: numpy, aeosa, Sphinx-1.3.1, Theano 0.7, nltk-3.1</p> <p>My background: I am very new to theano and numpy and haven't taken a formal course in machine learning or discrete math.</p> <p>The recurrent neural network for natural language processing I currently use is taken from here:</p> <p><a href="https://github.com/dennybritz/rnn-tutorial-gru-lstm/blob/master/gru_theano.py" rel="nofollow">https://github.com/dennybritz/rnn-tutorial-gru-lstm/blob/master/gru_theano.py</a></p> <p>The only change made to this file is replacing references to <code>theano.config.floatX</code> with the string <code>'float32'</code>.</p> <p>I also use the utils.py and train.py modules included in the repository, with only minor changes.</p> <p>The adam optimizer I plan to incorporate in place of the sgd/rms code implemented in the example repository is found here: <a href="https://gist.github.com/skaae/ae7225263ca8806868cb" rel="nofollow">https://gist.github.com/skaae/ae7225263ca8806868cb</a></p> <p>Reproduced here (again with references to the <code>.config.floatX</code> replaced with the hard-coded <code>'float32'</code>):</p> <p>(<code>theano</code> as <code>th</code>, <code>theano.shared</code> as <code>thsh</code>, <code>theano.tensor</code> as <code>T</code>, <code>numpy</code> as <code>np</code>)</p> <pre><code>def adam(loss, all_params, learning_rate=0.001, b1=0.9, b2=0.999, e=1e-8, gamma=1-1e-8): """ ADAM update rules Default values are taken from [Kingma2014] References: [Kingma2014] Kingma, Diederik, and Jimmy Ba. "Adam: A Method for Stochastic Optimization." arXiv preprint arXiv:1412.6980 (2014). 
http://arxiv.org/pdf/1412.6980v4.pdf """ updates = [] all_grads = th.grad(loss, all_params) alpha = learning_rate t = thsh(np.float32(1)) b1_t = b1*gamma**(t-1) #(Decay the first moment running average coefficient) for theta_previous, g in zip(all_params, all_grads): m_previous = thsh(np.zeros(theta_previous.get_value().shape.astype('float32'))) v_previous = thsh(np.zeros(theta_previous.get_value().shape.astype('float32'))) m = b1_t*m_previous + (1 - b1_t)*g # (Update biased first moment estimate) v = b2*v_previous + (1 - b2)*g**2 # (Update biased second raw moment estimate) m_hat = m / (1-b1**t) # (Compute bias-corrected first moment estimate) v_hat = v / (1-b2**t) # (Compute bias-corrected second raw moment estimate) theta = theta_previous - (alpha * m_hat) / (T.sqrt(v_hat) + e) #(Update parameters) updates.append((m_previous, m)) updates.append((v_previous, v)) updates.append((theta_previous, theta) ) updates.append((t, t + 1.)) return updates </code></pre> <p><strong>My question</strong> is this:</p> <p>How would you modify the GRUTheano module to use the Adam method above in place of the builtin sgd/rmsprop function?</p> <p>It looks like the key changes would be to lines 99-126 of GRUTheano:</p> <pre><code> # SGD parameters learning_rate = T.scalar('learning_rate') decay = T.scalar('decay') # rmsprop cache updates mE = decay * self.mE + (1 - decay) * dE ** 2 mU = decay * self.mU + (1 - decay) * dU ** 2 mW = decay * self.mW + (1 - decay) * dW ** 2 mV = decay * self.mV + (1 - decay) * dV ** 2 mb = decay * self.mb + (1 - decay) * db ** 2 mc = decay * self.mc + (1 - decay) * dc ** 2 self.sgd_step = theano.function( [x, y, learning_rate, theano.Param(decay, default=0.9)], [], updates=[(E, E - learning_rate * dE / T.sqrt(mE + 1e-6)), (U, U - learning_rate * dU / T.sqrt(mU + 1e-6)), (W, W - learning_rate * dW / T.sqrt(mW + 1e-6)), (V, V - learning_rate * dV / T.sqrt(mV + 1e-6)), (b, b - learning_rate * db / T.sqrt(mb + 1e-6)), (c, c - learning_rate * dc / T.sqrt(mc + 1e-6)), (self.mE, mE), (self.mU, mU), (self.mW, mW), (self.mV, mV), (self.mb, mb), (self.mc, mc) ]) </code></pre>
2015-11-21 17:01:17.417000+00:00
2016-01-07 01:06:22.190000+00:00
2016-01-07 01:06:22.190000+00:00
python|neural-network|theano|gradient-descent|recurrent-neural-network
[]
0
49,137,774
<p><strong>Part 1: Is the batch normalization used in the right way?</strong><br><br> The way you've called the BatchNormalization layer is correct; axis=3 is what you want, as recommended by the documentation. Keep in mind that in the case of your model, axis=3 is equivalent to the default setting, axis=-1, so you do not need to call it explicitly.<br><br><strong>Part 2: In this case it is used after the activation function, right? Is there a possibility to use it before the activation function?</strong><br><br> Yes, batch normalization as defined in <a href="https://arxiv.org/pdf/1502.03167.pdf" rel="noreferrer">the 2014 research paper by Ioffe and Szegedy</a> is intended for use after the activation layer as a means of reducing internal covariate shift. Your code correctly applies the batchnorm after the activations on your convolutional layers. Its use after the activation layer can be thought of as a "pre-processing step" for the information before it reaches the next layer as an input.<br><br> For that reason, batch normalization can also serve as a data pre-processing step, which you can use immediately after your input layer (as discussed in <a href="https://stackoverflow.com/questions/46771939/batch-normalization-instead-of-input-normalization">this response</a>.) However, as that answer mentions, batchnorm should not be abused; it's computationally expensive and can force your model into approximately linear behavior (<a href="https://stats.stackexchange.com/questions/296680/input-layer-batch-normalization/299377#299377">this answer</a> goes into more detail about this issue).<br><br> Using batchnorm in some other step in the model (not after activation layer or input layer) would have poorly-understood effects on model performance; it's a process intended explicitly to be applied to the outputs of the activation layer.<br><br> In my experience with u-nets, I've had a lot of success applying batchnorm only after the convolutional layers before max pooling; this effectively doubles the computational "bang for my buck" on normalization, since these tensors are re-used in the u-net architecture. Aside from that, I don't use batchnorm (except maybe on the inputs if the mean pixel intensities per image are super heterogeneous.)</p>
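<p>For instance, a small helper in the same Keras functional style as the question, reflecting the conv, then activation, then batchnorm ordering discussed above (a hedged sketch, not the only valid arrangement):</p> <pre><code>
from keras.layers import Conv2D, BatchNormalization

def conv_block(x, filters):
    x = Conv2D(filters, (3, 3), strides=(2, 2), activation='relu', padding='same')(x)
    x = BatchNormalization(axis=3)(x)   # equivalent to the default axis=-1 here
    return x

x = conv_block(input_1, 16)
x = conv_block(x, 32)
# ... and so on, mirroring conv1..conv5 in the original model ...
</code></pre>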
2018-03-06 18:35:46.427000+00:00
2018-03-30 03:04:04.680000+00:00
2018-03-30 03:04:04.680000+00:00
null
46,316,687
<p>I am new to DL and Keras. Currently I try to implement a Unet-like CNN and now I want to include batch normalization layers into my non-sequential model but do not really now how.</p> <p>That is my current try to include it:</p> <pre><code>input_1 = Input((X_train.shape[1],X_train.shape[2], X_train.shape[3])) conv1 = Conv2D(16, (3,3), strides=(2,2), activation='relu', padding='same')(input_1) batch1 = BatchNormalization(axis=3)(conv1) conv2 = Conv2D(32, (3,3), strides=(2,2), activation='relu', padding='same')(batch1) batch2 = BatchNormalization(axis=3)(conv2) conv3 = Conv2D(64, (3,3), strides=(2,2), activation='relu', padding='same')(batch2) batch3 = BatchNormalization(axis=3)(conv3) conv4 = Conv2D(128, (3,3), strides=(2,2), activation='relu', padding='same')(batch3) batch4 = BatchNormalization(axis=3)(conv4) conv5 = Conv2D(256, (3,3), strides=(2,2), activation='relu', padding='same')(batch4) batch5 = BatchNormalization(axis=3)(conv5) conv6 = Conv2D(512, (3,3), strides=(2,2), activation='relu', padding='same')(batch5) drop1 = Dropout(0.25)(conv6) upconv1 = Conv2DTranspose(256, (3,3), strides=(1,1), padding='same')(drop1) upconv2 = Conv2DTranspose(128, (3,3), strides=(2,2), padding='same')(upconv1) upconv3 = Conv2DTranspose(64, (3,3), strides=(2,2), padding='same')(upconv2) upconv4 = Conv2DTranspose(32, (3,3), strides=(2,2), padding='same')(upconv3) upconv5 = Conv2DTranspose(16, (3,3), strides=(2,2), padding='same')(upconv4) upconv5_1 = concatenate([upconv5,conv2], axis=3) upconv6 = Conv2DTranspose(8, (3,3), strides=(2,2), padding='same')(upconv5_1) upconv6_1 = concatenate([upconv6,conv1], axis=3) upconv7 = Conv2DTranspose(1, (3,3), strides=(2,2), activation='linear', padding='same')(upconv6_1) model = Model(outputs=upconv7, inputs=input_1) </code></pre> <p>Is the batch normalization used in the right way? In the keras documentation I read that you typically want to normalize the "features axis"!? This is a short snippet out of the model summary:</p> <pre><code>==================================================================================================== input_1 (InputLayer) (None, 512, 512, 9) 0 ____________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 256, 256, 16) 1312 input_1[0][0] ____________________________________________________________________________________________________ conv2d_2 (Conv2D) (None, 128, 128, 32) 4640 conv2d_1[0][0] ____________________________________________________________________________________________________ conv2d_3 (Conv2D) (None, 64, 64, 64) 18496 conv2d_2[0][0] ____________________________________________________________________________________________________ </code></pre> <p>In this case my features axis is axis 3(start counting at 0), right? I read about discussions whether you should implement the batch normalization before or after the activation function. In this case it is used after the activation function, right? Is there a possibility to use it before the activation function?</p> <p>Thank you very much for your help and feedback! Really appreciate it!</p>
2017-09-20 08:12:21.857000+00:00
2019-08-01 02:16:16.583000+00:00
2017-09-20 08:36:40.983000+00:00
deep-learning|keras|conv-neural-network|batch-normalization|nonsequential
['https://arxiv.org/pdf/1502.03167.pdf', 'https://stackoverflow.com/questions/46771939/batch-normalization-instead-of-input-normalization', 'https://stats.stackexchange.com/questions/296680/input-layer-batch-normalization/299377#299377']
3
73,342,580
<p>It is difficult to understand what you are truly looking for, but I think I have a rough idea. I think you want to plot the survival curve that would have been observed if every person in your sample had received a specific value for the continuous covariate. If there is no confounding, you can simply use a Cox model that includes only the continuous covariate and use the <code>predict()</code> function for a range of points in time and plot the results. If you need to adjust for confounding, you can include the confounders in the Cox model and use <em>g-computation</em> to obtain the desired probabilities. I describe this in a recent preprint: <a href="https://arxiv.org/pdf/2208.04644.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2208.04644.pdf</a></p> <p>This can be done in R using the <code>contsurvplot</code> package (also developed by me). First, install the package using:</p> <pre><code>devtools::install_github(&quot;RobinDenz1/contsurvplot&quot;) </code></pre> <p>Afterwards, fit your Cox model, but use <code>x=TRUE</code> in the <code>coxph</code> call:</p> <pre><code>library(survival) library(contsurvplot) library(riskRegression) library(ggplot2) fit2 &lt;- coxph(Surv(stop, event) ~ size + rx, data=bladder, x=TRUE) </code></pre> <p>You can now call the <code>plot_surv_lines</code> function to obtain the causal survival curves for specific values of <code>size</code>, given the model. Using the <code>horizon</code> argument you can tell the function for which values you want to plot the survival curves. I choose the 20% and 80% quantile of <code>size</code> as you described:</p> <pre><code>plot_surv_lines(time=&quot;stop&quot;, status=&quot;event&quot;, variable=&quot;size&quot;, data=bladder, model=fit2, horizon=quantile(bladder$size, probs=c(0.2, 0.8))) </code></pre> <p><a href="https://i.stack.imgur.com/BV1lG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BV1lG.png" alt="example_lines" /></a> The package contains a lot more plotting routines to visualize the causal effect of a continuous variable on a time-to-event outcome that might be more suitable for what you actually want.</p>
2022-08-13 08:07:45.603000+00:00
2022-08-13 08:07:45.603000+00:00
null
null
57,759,759
<p>How can I plot predicted survival curves of a continuous covariate (let's say 20th and 80th percentile of the value) using the corrected group prognosis method as implemented in R by <a href="https://cran.r-project.org/web/packages/survival/vignettes/adjcurve.pdf" rel="nofollow noreferrer">Therneau</a></p> <p>For example, </p> <pre><code>library(survival) library(survminer) fit &lt;- coxph( Surv(stop, event) ~ size + strata(rx), data = bladder ) ggadjustedcurves(fit, data=bladder, method = "conditional", strata=rx) </code></pre> <p>Now, this is useful because I am given two survival curves that are stratified by rx (either 0 or 1) and the conditional method is being acted upon the bladder data set. However, let's say I would like to use the marginal method but <em>not stratify</em> and instead plot my continuous covariate at 20th and 80th value but also re-balance the subpopulation. Would like any step in the right direction.</p> <p>To re-state, I have a Cox model with continuous predictors. I would like to build a Cox model but not stratify on rx but have this in the model. Then, I want to pass the created Cox object into ggadjustedcurves() function with uses "subpopulation re-balancing" when given a reference data set. And then, instead of showing two survival curves stratified on a categorical variable, I want to plot two representative survival curves at the 20th and 80th percentile.</p> <p><strong>EDIT</strong></p> <p>My first attempt</p> <pre><code>fit2 &lt;- coxph( Surv(stop, event) ~ size + rx, data = bladder ) #remove strata fit2 # CGP pred&lt;- data.frame("rx" = 1, "size" = 3.2) ggadjustedcurves(fit2, data = pred , method = "conditional", reference = bladder) </code></pre> <p>Is this what I think it is? Conditional re-balancing has been applied to the reference data set and then the predicted curves are generated for an individual with rx=1 and size of 3.2.</p>
2019-09-02 15:47:02.663000+00:00
2022-08-13 08:07:45.603000+00:00
2019-09-02 17:03:20.167000+00:00
r|data-visualization|survival-analysis|cox-regression
['https://arxiv.org/pdf/2208.04644.pdf', 'https://i.stack.imgur.com/BV1lG.png']
2
42,867,744
<p><code>inception_v1.py</code> implements <a href="http://arxiv.org/pdf/1409.4842v1.pdf" rel="nofollow noreferrer">this</a> paper whereas <code>inception_v2.py</code> implements <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">Batch Normalization</a> paper, which is precisely what you notice.</p>
2017-03-17 22:00:35.637000+00:00
2017-03-17 22:00:35.637000+00:00
null
null
42,731,181
<p>We know that in inception v2 paper (<a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">Batch Normalization</a>), it add Batch Normalization before convolution layer to reduce internal covariate shift, and remove Local Response Normalization. But when I was studying <a href="https://github.com/tensorflow/models/blob/master/slim/nets/inception_v1.py" rel="nofollow noreferrer">inception_v1.py</a> and <a href="https://github.com/tensorflow/models/blob/master/slim/nets/inception_v2.py" rel="nofollow noreferrer">inception_v2.py</a>, I think these two model's code is almost same... In inception_2.py, I can't find Batch Normalization. For example: in inception_v1.py:</p> <pre><code>end_point = 'Mixed_3b' with tf.variable_scope(end_point): with tf.variable_scope('Branch_0'): branch_0 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1') with tf.variable_scope('Branch_1'): branch_1 = slim.conv2d(net, 96, [1, 1], scope='Conv2d_0a_1x1') branch_1 = slim.conv2d(branch_1, 128, [3, 3], scope='Conv2d_0b_3x3') with tf.variable_scope('Branch_2'): branch_2 = slim.conv2d(net, 16, [1, 1], scope='Conv2d_0a_1x1') branch_2 = slim.conv2d(branch_2, 32, [3, 3], scope='Conv2d_0b_3x3') with tf.variable_scope('Branch_3'): branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3') branch_3 = slim.conv2d(branch_3, 32, [1, 1], scope='Conv2d_0b_1x1') net = tf.concat(3, [branch_0, branch_1, branch_2, branch_3]) </code></pre> <p>in inception_v2.py:</p> <pre><code>end_point = 'Mixed_3b' with tf.variable_scope(end_point): with tf.variable_scope('Branch_0'): branch_0 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1') with tf.variable_scope('Branch_1'): branch_1 = slim.conv2d( net, depth(64), [1, 1], weights_initializer=trunc_normal(0.09), scope='Conv2d_0a_1x1') branch_1 = slim.conv2d(branch_1, depth(64), [3, 3], scope='Conv2d_0b_3x3') with tf.variable_scope('Branch_2'): branch_2 = slim.conv2d( net, depth(64), [1, 1], weights_initializer=trunc_normal(0.09), scope='Conv2d_0a_1x1') branch_2 = slim.conv2d(branch_2, depth(96), [3, 3], scope='Conv2d_0b_3x3') branch_2 = slim.conv2d(branch_2, depth(96), [3, 3], scope='Conv2d_0c_3x3') with tf.variable_scope('Branch_3'): branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3') branch_3 = slim.conv2d( branch_3, depth(32), [1, 1], weights_initializer=trunc_normal(0.1), scope='Conv2d_0b_1x1') net = tf.concat(3, [branch_0, branch_1, branch_2, branch_3]) </code></pre> <p>So, here is my question, what is the different between inception_v1.py and inception_v2.py? Thanks a lot!</p>
2017-03-11 03:58:27.867000+00:00
2017-03-17 22:00:35.637000+00:00
null
tensorflow
['http://arxiv.org/pdf/1409.4842v1.pdf', 'https://arxiv.org/abs/1502.03167']
2
50,591,219
<p>The problem you're referring to is called the matrix completion problem. For a library, see <a href="https://github.com/tonyduan/matrix-completion" rel="nofollow noreferrer">here</a>. Common methods are <a href="https://arxiv.org/abs/0810.3286" rel="nofollow noreferrer">singular value thresholding</a> and alternating least squares. An alternative implementation (a simple matrix-factorization tutorial) is <a href="http://www.quuxlabs.com/blog/2010/09/matrix-factorization-a-simple-tutorial-and-implementation-in-python/" rel="nofollow noreferrer">here</a>, and its code is available <a href="http://www.quuxlabs.com/wp-content/uploads/2010/09/mf.py_.txt" rel="nofollow noreferrer">here</a>.</p>
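<p>To make the completion idea concrete, here is a tiny iterative truncated-SVD imputation sketch in NumPy. It is in the spirit of the methods above but not a faithful implementation of either; the toy matrix, rank and iteration count are placeholders:</p> <pre><code>
import numpy as np

def complete(R, mask, rank=2, n_iter=100):
    """R: ratings matrix; mask is True where an entry is observed."""
    X = np.where(mask, R, 0.0)                 # start with zeros in the unknown cells
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                         # keep only the top-`rank` factors
        X_low = (U * s) @ Vt                   # low-rank approximation
        X = np.where(mask, R, X_low)           # keep observed ratings, fill the rest
    return X

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4]], dtype=float)
mask = (R != 0)                                # here 0 means "not rated"
print(np.round(complete(R, mask), 2))          # the zeros become predicted ratings
</code></pre>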
2018-05-29 19:01:39.730000+00:00
2018-05-29 19:01:39.730000+00:00
null
null
50,371,056
<p>I'm currently doing an internship at LIRIS (a computer science research laboratory) where I work on recommender systems. My internship supervisor asked me to prepare a presentation for tomorrow about recommending movies with SVD, so I have been learning about it.</p> <p>I think I understand the mathematical part, A = US(V^T), but some things about the next step (actually recommending movies) are still unclear to me. I have read a lot of material and it has not come together in my head :D</p> <p>I don't understand whether the SVD fills in the numbers that are missing in the matrix A (i.e. predicts ratings for users who haven't rated a film), or whether we need a dense matrix A that we factorize into 3 matrices in order to recommend movies.</p> <p>In the first case, how does it work? I've found nothing about that... In the second case, how can the 3 matrices help us recommend movies? I don't see the link between a decomposed matrix and recommending movies.</p> <p>I would be very thankful if someone can help me :)</p> <p>PS: sorry for the English, I'm a French student :D</p>
2018-05-16 12:32:14.677000+00:00
2018-05-29 19:01:39.730000+00:00
null
recommendation-engine|svd|recommendation-system
['https://github.com/tonyduan/matrix-completion', 'https://arxiv.org/abs/0810.3286', 'http://www.quuxlabs.com/blog/2010/09/matrix-factorization-a-simple-tutorial-and-implementation-in-python/', 'http://www.quuxlabs.com/wp-content/uploads/2010/09/mf.py_.txt']
4
69,154,047
<p>I was looking for an answer to the same question, and I came across <a href="https://arxiv.org/pdf/2003.05672.pdf" rel="nofollow noreferrer">the following paper from 2019</a> and the <a href="https://github.com/nla-group/ABBA-LSTM" rel="nofollow noreferrer">corresponding Git repo</a>. In particular, see section 5.3 in the paper. It seems like ABBA-LSTM is the solution, though it depends on the time series problem you're trying to solve.</p>
2021-09-12 18:06:58.590000+00:00
2021-09-12 18:06:58.590000+00:00
null
null
49,374,709
<p>I try to train a simple LSTM to predict the next number in a sequence (1,2,3,4,5 --> 6). </p> <pre class="lang-python prettyprint-override"><code>from keras.models import Sequential from keras.layers import LSTM, Dense from sklearn.model_selection import train_test_split import numpy as np import matplotlib.pyplot as plt xs = [[[(j+i)/100] for j in range(5)] for i in range(100)] ys = [(i+5)/100 for i in range(100)] x_train, x_test, y_train, y_test = train_test_split(xs, ys) model = Sequential() model.add(LSTM(1, input_shape=(5,1), return_sequences=True)) model.add(LSTM(1, return_sequences=False)) model.add(Dense(1)) model.compile(loss='mae', optimizer='adam', metrics=['accuracy']) training = model.fit(x_train, y_train, epochs=200) new_xs = np.array(xs)*5 new_ys = np.array(ys)*5 pred = model.predict(new_xs) plt.scatter(range(len(pred)), pred, c='r') plt.scatter(range(len(new_ys)), new_ys, c='b') </code></pre> <p>In order for the net to learn anything I had to normalize the training data (divided it by 100). It did work indeed for the data from the range it was trained on.</p> <p>I want it to be able to predict the numbers form outside the range it was trained on, but as soon as it leaves the range, it starts to diverge:</p> <p><a href="https://i.stack.imgur.com/dfNsq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dfNsq.png" alt="enter image description here"></a></p> <p>When I increased the number of units in both LSTM layers to 30 it looks a little better, but it's still diverging:</p> <p><a href="https://i.stack.imgur.com/Il3g7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Il3g7.png" alt="enter image description here"></a></p> <p>Is LSTM capable of learning that task without adding an infinite number of units?</p>
2018-03-20 00:48:17.473000+00:00
2021-09-12 18:06:58.590000+00:00
2018-03-20 09:28:36.583000+00:00
keras|lstm
['https://arxiv.org/pdf/2003.05672.pdf', 'https://github.com/nla-group/ABBA-LSTM']
2
47,682,413
<p>Well, this depends on the problem, the form of the dataset, and the class of unsupervised algorithm used to solve it.</p> <p>Roughly: dimensionality reduction techniques are usually evaluated by the reconstruction error, so there a k-fold cross-validation procedure can be used.</p> <p>For clustering algorithms, I would suggest statistical testing to evaluate performance. There is also a slightly time-consuming trick: split the dataset, hand-label the test set with meaningful classes, and cross-validate on that.</p> <p>In any case, if an unsupervised algorithm is applied to supervised (labelled) data, it is always good to cross-validate.</p> <p>Overall: it is not strictly necessary to split the data into train and test sets, but if you can do it, it is always better.</p> <p>Here is an article which explains how cross-validation is a good tool for unsupervised learning <a href="http://udini.proquest.com/view/cross-validation-for-unsupervised-pqid:1904931481/" rel="nofollow noreferrer">http://udini.proquest.com/view/cross-validation-for-unsupervised-pqid:1904931481/</a> and the full text is available here <a href="http://arxiv.org/pdf/0909.3052.pdf" rel="nofollow noreferrer">http://arxiv.org/pdf/0909.3052.pdf</a></p> <p><a href="https://www.researchgate.net/post/Which_are_the_methods_to_validate_an_unsupervised_machine_learning_algorithm" rel="nofollow noreferrer">https://www.researchgate.net/post/Which_are_the_methods_to_validate_an_unsupervised_machine_learning_algorithm</a></p>
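<p>As a concrete illustration of the reconstruction-error idea for dimensionality reduction, here is a small scikit-learn sketch; the random data and the number of components are placeholder choices, purely for illustration.</p>
<pre><code>import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import KFold

X = np.random.rand(500, 30)              # placeholder data
errors = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    pca = PCA(n_components=10).fit(X[train_idx])     # fit only on the training fold
    X_test = X[test_idx]
    X_rec = pca.inverse_transform(pca.transform(X_test))
    errors.append(np.mean((X_test - X_rec) ** 2))    # held-out reconstruction error

print(np.mean(errors))
</code></pre>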
2017-12-06 19:52:25.870000+00:00
2017-12-06 19:52:25.870000+00:00
null
null
31,673,388
<p>In supervised learning I have the typical train/test split to learn the algorithm, e.g. Regression or Classification. Regarding unsupervised learning, my question is: Is train/test split necessary and useful? If yes, why?</p>
2015-07-28 10:14:16.660000+00:00
2020-12-18 01:08:32.750000+00:00
null
machine-learning|unsupervised-learning
['http://udini.proquest.com/view/cross-validation-for-unsupervised-pqid:1904931481/', 'http://arxiv.org/pdf/0909.3052.pdf', 'https:///www.researchgate.net/post/Which_are_the_methods_to_validate_an_unsupervised_machine_learning_algorithm']
3
38,855,276
<p>For more information, read the paper "<a href="https://arxiv.org/pdf/1406.2419.pdf" rel="nofollow">Why do linear SVMs trained on HOG features perform so well?</a>" by Hilton Bristow and Simon Lucey (2014).</p>
2016-08-09 15:49:32.027000+00:00
2016-08-09 15:49:32.027000+00:00
null
null
24,470,621
<p>OK, almost all applications I have seen that use HoG features use a linear SVM as the classifier. Can someone explain to me why linear SVMs are chosen and why they give good performance?</p> <p>Are linear SVMs chosen simply because they are simpler and easier to train than SVMs with polynomial or Gaussian kernels, and because those kernels do not give significantly better performance?</p>
2014-06-28 20:21:34.760000+00:00
2016-08-09 15:49:32.027000+00:00
null
machine-learning|computer-vision|classification|object-detection
['https://arxiv.org/pdf/1406.2419.pdf']
1
23,501,303
<p>I find it worth mentioning <a href="http://www.nongnu.org/confuse/" rel="noreferrer">libConfuse</a> here, and quote its description:</p> <blockquote> <p>libConfuse is a configuration file parser library, licensed under the terms of the ISC license, and written in C. It supports sections and (lists of) values (strings, integers, floats, booleans or other sections), as well as some other features (such as single/double-quoted strings, environment variable expansion, functions and nested include statements). It makes it very easy to add configuration file capability to a program using a simple API.</p> <p>The goal of libConfuse is not to be the configuration file parser library with a gazillion of features. Instead, it aims to be easy to use and quick to integrate with your code. libConfuse was called libcfg before, but its name was changed to not confuse itself with other similar libraries.</p> </blockquote> <p>It seems fairly similar to the already mentioned libconfig. There is a short comparison of C and C++ parsers in <a href="http://arxiv.org/pdf/1103.3021.pdf?origin=publication_detail" rel="noreferrer">A study of the existing libraries to read from configuration files</a> that might be a useful start for anyone choosing among the alternatives.</p>
2014-05-06 17:33:41.437000+00:00
2019-11-04 12:22:59.233000+00:00
2020-06-20 09:12:55.060000+00:00
null
2,250,607
<p>Let's say I have a simple config file that my c program needs to read/parse.</p> <p>Let's say it looks a little bit like this:</p> <pre><code>#Some comment key1=data1 key2=data2 </code></pre> <p>Is there a standard c lib that I can use instead of writing my own parser?</p> <p>Thanks Johan</p> <hr> <p><em>Note</em>: Today I have my own little parser, but there must be some standard libs that solves this simple problem.</p>
2010-02-12 09:01:08.420000+00:00
2021-04-16 18:02:06.723000+00:00
null
c|linux
['http://www.nongnu.org/confuse/', 'http://arxiv.org/pdf/1103.3021.pdf?origin=publication_detail']
2
44,387,742
<p>This is a common problem with imbalanced datasets like the recently released Quora dataset you are using. Since the Quora dataset is imbalanced (~63% negative and ~37% positive examples), you need proper weight initialization. Without it, your solution gets stuck in a local minimum and the network learns to predict only the negative class. Hence the 63% accuracy, because that is the percentage of 'not similar' questions in your validation data. If you check the results on your validation set, you will notice that it predicts all zeros. The truncated normal initialization proposed in He et al., <a href="http://arxiv.org/abs/1502.01852" rel="nofollow noreferrer">http://arxiv.org/abs/1502.01852</a>, is a good alternative for initializing the weights.</p>
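<p>A minimal sketch of He-style initialization in TF 1.x terms, assuming a hypothetical layer with a made-up fan-in; the shapes and variable name are illustrative only, and you would substitute the dimensions of your own LSTM/dense layers:</p>
<pre><code>import math
import tensorflow as tf

# He et al. (2015): draw weights from a truncated normal with stddev = sqrt(2 / fan_in).
fan_in = 256                                   # hypothetical input dimension
he_init = tf.truncated_normal_initializer(stddev=math.sqrt(2.0 / fan_in))

W = tf.get_variable('W_example', shape=[fan_in, 128], initializer=he_init)
</code></pre>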
2017-06-06 10:38:38.777000+00:00
2017-06-06 10:38:38.777000+00:00
null
null
44,116,689
<p><strong>Dataset Description</strong></p> <p>The dataset contains a set of question pairs and a label which tells if the questions are same. e.g.</p> <blockquote> <p>"How do I read and find my YouTube comments?" , "How can I see all my Youtube comments?" , "1"</p> </blockquote> <p>The goal of the model is to identify if the given question pair is same or different.</p> <p><strong>Approach</strong></p> <p>I have created a <a href="https://www.quora.com/What-are-Siamese-neural-networks-what-applications-are-they-good-for-and-why" rel="nofollow noreferrer">Siamese network</a> to identify if two questions are same. Following is the model:</p> <pre><code>graph = tf.Graph() with graph.as_default(): embedding_placeholder = tf.placeholder(tf.float32, shape=embedding_matrix.shape, name='embedding_placeholder') with tf.variable_scope('siamese_network') as scope: labels = tf.placeholder(tf.int32, [batch_size, None], name='labels') keep_prob = tf.placeholder(tf.float32, name='question1_keep_prob') with tf.name_scope('question1') as question1_scope: question1_inputs = tf.placeholder(tf.int32, [batch_size, seq_len], name='question1_inputs') question1_embedding = tf.get_variable(name='embedding', initializer=embedding_placeholder, trainable=False) question1_embed = tf.nn.embedding_lookup(question1_embedding, question1_inputs) question1_lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) question1_drop = tf.contrib.rnn.DropoutWrapper(question1_lstm, output_keep_prob=keep_prob) question1_multi_lstm = tf.contrib.rnn.MultiRNNCell([question1_drop] * lstm_layers) q1_initial_state = question1_multi_lstm.zero_state(batch_size, tf.float32) question1_outputs, question1_final_state = tf.nn.dynamic_rnn(question1_multi_lstm, question1_embed, initial_state=q1_initial_state) scope.reuse_variables() with tf.name_scope('question2') as question2_scope: question2_inputs = tf.placeholder(tf.int32, [batch_size, seq_len], name='question2_inputs') question2_embedding = question1_embedding question2_embed = tf.nn.embedding_lookup(question2_embedding, question2_inputs) question2_lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) question2_drop = tf.contrib.rnn.DropoutWrapper(question2_lstm, output_keep_prob=keep_prob) question2_multi_lstm = tf.contrib.rnn.MultiRNNCell([question2_drop] * lstm_layers) q2_initial_state = question2_multi_lstm.zero_state(batch_size, tf.float32) question2_outputs, question2_final_state = tf.nn.dynamic_rnn(question2_multi_lstm, question2_embed, initial_state=q2_initial_state) </code></pre> <p>Calculate the cosine distance using the RNN outputs:</p> <pre><code>with graph.as_default(): diff = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(question1_outputs[:, -1, :], question2_outputs[:, -1, :])), reduction_indices=1)) margin = tf.constant(1.) 
labels = tf.to_float(labels) match_loss = tf.expand_dims(tf.square(diff, 'match_term'), 0) mismatch_loss = tf.expand_dims(tf.maximum(0., tf.subtract(margin, tf.square(diff)), 'mismatch_term'), 0) loss = tf.add(tf.matmul(labels, match_loss), tf.matmul((1 - labels), mismatch_loss), 'loss_add') distance = tf.reduce_mean(loss) optimizer = tf.train.AdamOptimizer(learning_rate).minimize(distance) </code></pre> <p>Following is the code to train the model:</p> <pre><code>with graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=graph) as sess: sess.run(tf.global_variables_initializer(), feed_dict={embedding_placeholder: embedding_matrix}) iteration = 1 for e in range(epochs): summary_writer = tf.summary.FileWriter('/Users/mithun/projects/kaggle/quora_question_pairs/logs', sess.graph) summary_writer.add_graph(sess.graph) for ii, (x1, x2, y) in enumerate(get_batches(question1_train, question2_train, label_train, batch_size), 1): feed = {question1_inputs: x1, question2_inputs: x2, labels: y[:, None], keep_prob: 0.9 } loss1 = sess.run([distance], feed_dict=feed) if iteration%5==0: print("Epoch: {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Train loss: {:.3f}".format(loss1)) if iteration%50==0: val_acc = [] for x1, x2, y in get_batches(question1_val, question2_val, label_val, batch_size): feed = {question1_inputs: x1, question2_inputs: x2, labels: y[:, None], keep_prob: 1 } batch_acc = sess.run([accuracy], feed_dict=feed) val_acc.append(batch_acc) print("Val acc: {:.3f}".format(np.mean(val_acc))) iteration +=1 saver.save(sess, "checkpoints/quora_pairs.ckpt") </code></pre> <p>I have trained the above model with about 10,000 labeled data. But, the accuracy is stagnant at around 0.630 and strangely the validation accuracy is same across all the iterations. </p> <pre><code>lstm_size = 64 lstm_layers = 1 batch_size = 128 learning_rate = 0.001 </code></pre> <p>Is there anything wrong with the way I have created the model? </p>
2017-05-22 15:22:29.790000+00:00
2017-06-06 10:38:38.777000+00:00
2017-05-25 15:31:53+00:00
tensorflow|lstm|recurrent-neural-network
['http://arxiv.org/abs/1502.01852']
1
45,809,077
<blockquote> <p>are any additional code changes needed to create a custom tier of any size?</p> </blockquote> <p>No; no changes are needed to the MNIST sample to get it to work with different number or type of worker. To use a <code>tf.estimator.Estimator</code> on CloudML engine, you must have your program invoke <code>learn_runner.run</code>, as <a href="https://github.com/GoogleCloudPlatform/cloudml-dist-mnist-example/blob/79f07aef969995f0e4445311b9771735fbd7173b/trainer/task.py#L118" rel="nofollow noreferrer">exemplified</a> in the samples. When you do so, the framework reads in the <a href="https://cloud.google.com/ml-engine/docs/concepts/trainer-considerations#use_tf_config" rel="nofollow noreferrer"><code>TF_CONFIG</code></a> environment variables and populates a <a href="https://www.tensorflow.org/api_docs/python/tf/estimator/RunConfig" rel="nofollow noreferrer"><code>RunConfig</code></a> object with the relevant information such as the <a href="https://www.tensorflow.org/api_docs/python/tf/train/ClusterSpec" rel="nofollow noreferrer"><code>ClusterSpec</code></a>. It will automatically do the right thing on Parameter Server nodes and it will use the provided Estimator to start training and evaluation.</p> <p>Most of the magic happens because <code>tf.estimator.Estimator</code> automatically uses a device setter that distributes ops correctly. That device setter uses the cluster information from the <code>RunConfig</code> object whose constructor, by default, uses TF_CONFIG to do its magic (e.g. <a href="https://github.com/tensorflow/tensorflow/blob/593dc8e5d65f4db93e8f5fced772abb3531a9752/tensorflow/python/estimator/estimator.py#L790" rel="nofollow noreferrer">here</a>). You can see where the device setter is being used <a href="https://github.com/tensorflow/tensorflow/blob/593dc8e5d65f4db93e8f5fced772abb3531a9752/tensorflow/python/estimator/estimator.py#L627" rel="nofollow noreferrer">here</a>.</p> <p>This all means that you can just change your <code>config.yaml</code> by adding/removing workers and/or changing their types and things should generally just work.</p> <p>For sample code using a custom model_fn, see the <a href="https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/customestimator/trainer/model.py" rel="nofollow noreferrer">census/customestimator</a> example.</p> <p>That said, please note that as you add workers, you are increasing your effective batch size (this is true regardless of whether or not you are using <code>tf.estimator</code>). That is, if your <code>batch_size</code> was 50 and you were using 10 workers, that means each worker is processing batches of size 50, for an effective batch size of 10*50=500. Then if you increase the number of workers to 20, your effective batch size becomes 20*50=1000. You may find that you may need to decrease your learning rate accordingly (linear seems to generally work well; <a href="https://arxiv.org/abs/1706.02677" rel="nofollow noreferrer">ref</a>).</p> <blockquote> <p>I poked around some of the other ML Engine samples and found that reddit_tft uses distributed training, but they appear to have defined their own runconfig.cluster_spec within their trainer package: task.pyeven though they are also using the Estimator API. So, is there any additional configuration needed?</p> </blockquote> <p>No additional configuration needed. 
The reddit_tft sample does instantiate its own <code>RunConfig</code>, however, the constructor of <code>RunConfig</code> grabs any properties not explicitly set during instantiation by using <code>TF_CONFIG</code>. And it does so only as a convenience to figure out how many Parameter Servers and workers there are.</p> <blockquote> <p>Does any of this change if the config.yaml specifies using GPUs?</p> </blockquote> <p>You should not need to change anything to use <code>tf.estimator.Estimator</code> with GPUs, other than possibly needing to manually assign ops to the GPU (but that's not specific to CloudML Engine); see <a href="https://www.tensorflow.org/tutorials/using_gpu" rel="nofollow noreferrer">this article</a> for more info. I will look into clarifying the documentation.</p>
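<p>To make the effective-batch-size point above concrete, here is a small sketch of how a trainer could inspect <code>TF_CONFIG</code> (which CloudML Engine sets on each node, as described in the linked documentation) to count workers and compute the effective batch size; the per-worker batch size is just a placeholder value:</p>
<pre><code>import json
import os

tf_config = json.loads(os.environ.get('TF_CONFIG', '{}'))
cluster = tf_config.get('cluster', {})

# The master also processes batches, so count it together with the workers.
num_workers = len(cluster.get('worker', [])) + len(cluster.get('master', []))
num_workers = max(num_workers, 1)              # single-machine fallback

per_worker_batch_size = 50                     # placeholder
effective_batch_size = per_worker_batch_size * num_workers
print(num_workers, effective_batch_size)
</code></pre>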
2017-08-22 04:48:11.083000+00:00
2017-08-23 05:34:10.307000+00:00
2017-08-23 05:34:10.307000+00:00
null
45,783,285
<p>I completed this <a href="https://cloud.google.com/ml-engine/docs/tutorials/distributed-tensorflow-mnist-cloud-datalab" rel="nofollow noreferrer">tutorial</a> on distributed tensorflow experiments within an ML Engine experiment and I am looking to define my own custom tier instead of the <code>STANDARD_1</code> tier that they use in their <a href="https://github.com/GoogleCloudPlatform/cloudml-dist-mnist-example/blob/master/config/config.yaml" rel="nofollow noreferrer">config.yaml</a> file. If using the <code>tf.estimator.Estimator</code> API, are any additional code changes needed to create a custom tier of any size? For example, the article suggests: "If you distribute 10,000 batches among 10 worker nodes, each node works on roughly 1,000 batches." so this would suggest the config.yaml file below would be possible</p> <pre><code>trainingInput: scaleTier: CUSTOM masterType: complex_model_m workerType: complex_model_m parameterServerType: complex_model_m workerCount: 10 parameterServerCount: 4 </code></pre> <p>Are any code changes needed to the mnist tutorial to be able to use this custom configuration? Would this distribute the X number of batches across the 10 workers as the tutorial suggests would be possible? I poked around some of the other ML Engine samples and found that reddit_tft uses distributed training, but they appear to have defined their own <code>runconfig.cluster_spec</code> within their trainer package: <a href="https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/reddit_tft/trainer/task.py#L205" rel="nofollow noreferrer">task.py</a>even though they are also using the Estimator API. So, is there any additional configuration needed? My current understanding is that if using the Estimator API (even within your own defined model) that there should not need to be any additional changes.</p> <p>Does any of this change if the config.yaml specifies using GPUs? This <a href="https://cloud.google.com/ml-engine/docs/how-tos/using-gpus" rel="nofollow noreferrer">article</a> suggests for the Estimator API "No code changes are necessary as long as your ClusterSpec is configured properly. If a cluster is a mixture of CPUs and GPUs, map the ps job name to the CPUs and the worker job name to the GPUs." However, since the config.yaml is specifically identifying the machine type for parameter servers and workers, I am expecting that within ML-Engine the ClusterSpec will be configured properly based on the config.yaml file. However, I am not able to find any ml-engine documentation that confirms no changes are needed to take advantage of GPUs. </p> <p>Last, within ML-Engine I am wondering if there are any ways to identify usage of different configurations? The line "If you distribute 10,000 batches among 10 worker nodes, each node works on roughly 1,000 batches." suggests that the use of additional workers would be roughly linear, but I don't have any intuition around how to determine if more parameter servers are needed? What would one be able to check (either within the cloud dashboards or tensorboard) to determine if they have a sufficient number of parameter servers?</p>
2017-08-20 14:39:00.130000+00:00
2017-08-23 05:34:10.307000+00:00
null
google-cloud-platform|tensorflow|google-cloud-ml-engine
['https://github.com/GoogleCloudPlatform/cloudml-dist-mnist-example/blob/79f07aef969995f0e4445311b9771735fbd7173b/trainer/task.py#L118', 'https://cloud.google.com/ml-engine/docs/concepts/trainer-considerations#use_tf_config', 'https://www.tensorflow.org/api_docs/python/tf/estimator/RunConfig', 'https://www.tensorflow.org/api_docs/python/tf/train/ClusterSpec', 'https://github.com/tensorflow/tensorflow/blob/593dc8e5d65f4db93e8f5fced772abb3531a9752/tensorflow/python/estimator/estimator.py#L790', 'https://github.com/tensorflow/tensorflow/blob/593dc8e5d65f4db93e8f5fced772abb3531a9752/tensorflow/python/estimator/estimator.py#L627', 'https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/customestimator/trainer/model.py', 'https://arxiv.org/abs/1706.02677', 'https://www.tensorflow.org/tutorials/using_gpu']
9
6,377,601
<p>Even a 4 x 4 correlation matrix is sensitive to errors. In any case, here are some links that might help:</p> <p><a href="http://www.oxford-man.ox.ac.uk/documents/papers/2011OMI08_Sheppard.pdf" rel="nofollow">http://www.oxford-man.ox.ac.uk/documents/papers/2011OMI08_Sheppard.pdf</a></p> <p><a href="http://www.kevinsheppard.com/images/4/47/Chapter8.pdf" rel="nofollow">http://www.kevinsheppard.com/images/4/47/Chapter8.pdf</a></p> <p><a href="http://arxiv.org/PS_cache/arxiv/pdf/1009/1009.5331v1.pdf" rel="nofollow">http://arxiv.org/PS_cache/arxiv/pdf/1009/1009.5331v1.pdf</a></p> <p><a href="http://cran.r-project.org/web/packages/tawny/index.html" rel="nofollow">http://cran.r-project.org/web/packages/tawny/index.html</a></p> <p><a href="http://www.rinfinance.com/RinFinance2009/presentations/yollin_slides.pdf" rel="nofollow">http://www.rinfinance.com/RinFinance2009/presentations/yollin_slides.pdf</a></p> <p><a href="http://nurometic.com/quantitative-finance/tawny/portfolio-optimization-with-tawny" rel="nofollow">http://nurometic.com/quantitative-finance/tawny/portfolio-optimization-with-tawny</a></p> <p><a href="http://quantivity.wordpress.com/2011/04/17/minimum-variance-portfolios/" rel="nofollow">http://quantivity.wordpress.com/2011/04/17/minimum-variance-portfolios/</a></p>
2011-06-16 19:41:39.380000+00:00
2011-06-16 19:41:39.380000+00:00
null
null
6,377,016
<p>the correlation matrix is so large (50000by50000) that it is not efficient in calculating what I want. What I want to do is to break it down to groups and treat each as separate correlation matrices. However, how do I deal with the dependence between those smaller correlation matrices? I have been researching online all day but nothing comes up. There should be some algorithm out there that is related to the approximation of large correlation matrices like this, right?</p>
2011-06-16 18:51:20.123000+00:00
2011-06-16 19:41:39.380000+00:00
null
r|large-data-volumes|correlation|approximation
['http://www.oxford-man.ox.ac.uk/documents/papers/2011OMI08_Sheppard.pdf', 'http://www.kevinsheppard.com/images/4/47/Chapter8.pdf', 'http://arxiv.org/PS_cache/arxiv/pdf/1009/1009.5331v1.pdf', 'http://cran.r-project.org/web/packages/tawny/index.html', 'http://www.rinfinance.com/RinFinance2009/presentations/yollin_slides.pdf', 'http://nurometic.com/quantitative-finance/tawny/portfolio-optimization-with-tawny', 'http://quantivity.wordpress.com/2011/04/17/minimum-variance-portfolios/']
7
67,147,202
<p>It is not possible. In the past, people trained several networks for different scales, but the current state-of-the-art approach is feature pyramids.</p> <p><a href="https://arxiv.org/pdf/1612.03144.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1612.03144.pdf</a></p> <p>Another strong candidate is dilated convolution, which can learn dependencies between pixels at varying distances. You can concatenate the outputs of several dilation rates, and the model will then learn which distance matters for which case.</p> <p><a href="https://towardsdatascience.com/review-dilated-convolution-semantic-segmentation-9d5a5bd768f5" rel="nofollow noreferrer">https://towardsdatascience.com/review-dilated-convolution-semantic-segmentation-9d5a5bd768f5</a></p>
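<p>A minimal Keras sketch of the concatenated-dilation idea; the filter counts and dilation rates are arbitrary illustrative choices, not values taken from YOLOv4. Using <code>None</code> for the spatial dimensions also keeps the model fully convolutional, so it accepts different input sizes:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

def dilated_block(x, filters=64):
    # Parallel 3x3 convolutions with different dilation rates, concatenated
    # so later layers can weight the receptive field that works best.
    branches = [
        layers.Conv2D(filters, 3, padding='same', dilation_rate=r, activation='relu')(x)
        for r in (1, 2, 4)
    ]
    return layers.Concatenate()(branches)

inputs = tf.keras.Input(shape=(None, None, 3))   # unspecified spatial size
outputs = dilated_block(inputs)
model = tf.keras.Model(inputs, outputs)
model.summary()
</code></pre>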
2021-04-18 09:46:38.853000+00:00
2021-04-18 09:46:38.853000+00:00
null
null
67,109,080
<p>I am training a YOLOv4 (fully convolutional) network in tensorflow 2.3.0.</p> <p>I would like to change the spatial input shape of the network during training, to further adjust the weights to different scales.</p> <p>Is this possible?</p> <p>EDIT: I know darknet exists, but it cannot handle some very specific augmentations I use and have implemented in my repo, which is why I am asking explicitly about tensorflow.</p> <p>To be more precise about what I want to do:</p> <p>I want to train for several batches at <code>Y1xX1xC</code>, then change the input size to <code>Y2xX2xC</code> and train again for several batches, and so on.</p>
2021-04-15 13:07:28.693000+00:00
2021-04-19 07:28:34.160000+00:00
2021-04-19 07:28:34.160000+00:00
tensorflow
['https://arxiv.org/pdf/1612.03144.pdf', 'https://towardsdatascience.com/review-dilated-convolution-semantic-segmentation-9d5a5bd768f5']
2
53,959,991
<p>It is really a broad question, asking for answers relying mostly on opinions. Here are my two cents though, which you might find interesting as they do not go along with the previous answers here and on datascience.</p> <p>First, I wouldn't go with separate columns for each input. AFAIK, when different inputs are processed by different columns, it is almost always the case that the network is some sort of Siamese network and the columns share the same weights; or at least the columns all need to produce a similar code. That is not your case here, so I would simply not bother.</p> <p>Second, you are blessed with a problem that has a dense output <em>and</em> no need to learn a code. This should direct you straight to <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">U-nets</a>, which outperform any bottleneck-designed network without much effort. U-nets were introduced for dense segmentation, but they shine at any dense-output problem really.</p> <p>In short, just stack your inputs together and use a U-net.</p>
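<p>To illustrate the "stack the inputs and use a U-net" suggestion, here is a deliberately tiny Keras sketch with a single downsampling/upsampling stage and one skip connection. The spatial size, filter counts and channel numbers are placeholders (3 stacked inputs A, B, C in; 2 output maps d, e out); a real U-net would use several stages:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(m=64, n=64, in_channels=3, out_channels=2):
    inp = tf.keras.Input(shape=(m, n, in_channels))           # A, B, C stacked as channels
    c1 = layers.Conv2D(32, 3, padding='same', activation='relu')(inp)
    p1 = layers.MaxPooling2D()(c1)                            # encoder
    c2 = layers.Conv2D(64, 3, padding='same', activation='relu')(p1)
    u1 = layers.UpSampling2D()(c2)                            # decoder
    u1 = layers.Concatenate()([u1, c1])                       # skip connection
    c3 = layers.Conv2D(32, 3, padding='same', activation='relu')(u1)
    out = layers.Conv2D(out_channels, 1)(c3)                  # per-pixel outputs d and e
    return tf.keras.Model(inp, out)

model = tiny_unet()
model.compile(optimizer='adam', loss='mse')                   # substitute your custom loss here
</code></pre>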
2018-12-28 14:23:25.790000+00:00
2018-12-28 14:23:25.790000+00:00
null
null
53,844,318
<p>I have a set of 2D input arrays <code>m x n</code> namely <code>A,B,C</code> and I have to predict two 2D output arrays namely <code>d,e</code> for which I do have the expected values. You can think of the inputs/outputs as grey images if you like.</p> <p>Because of the spatial information is relevant (these are actually 2D physical domains) I want to use a Convolutional Neural Network to predict <code>d</code> and <code>e</code>. My design (not tested yet) looks as follows:</p> <p><a href="https://i.stack.imgur.com/BnDZv.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BnDZv.png" alt="enter image description here"></a></p> <p>Because I have multiple inputs, I guess I should use multiple columns (or branches) to find different features for each of the inputs (they look fairly different). Each of these columns follows a encoding-decoding architecture used in segmentation (see SegNet): Conv2D block involves a convolution+batch normalisation+ReLU layer. Deconv2D involves a deconvolution+batch normalisation+ReLU.</p> <p>Then, I can merge the output of each column by either concatenating, averaging or taking the maximum for example. To obtain the original <code>m x n</code> shape for each of the outputs I have seen I could do this with a <code>1 x 1</code> kernel convolution.</p> <p>I want to predict the two outputs from that single layer. Is that okay from the network structure point of view? Finally my loss function depends on the outputs themselves compared to the target plus another relation I want to impose.</p> <p>A would like to have some expert opinion on this since this is my first design of a CNN and I am not sure if I it makes sense as it is now and/or if there are better approaches (or network architectures) to this problem.</p> <p>I posted this originally in <a href="https://datascience.stackexchange.com/questions/42798/multiple-input-multiple-output-cnn-with-custom-loss-function">datascience</a> but I did not get much feedback. I am now posting it here since there is a bigger community on these topics plus I would be very grateful to receive implementation tips beside network architectural ones. Thanks.</p>
2018-12-19 03:48:15.493000+00:00
2018-12-28 14:23:25.790000+00:00
2018-12-19 07:20:11.837000+00:00
tensorflow|keras|neural-network|deep-learning|conv-neural-network
['https://arxiv.org/abs/1505.04597']
1
70,400,656
<p>Section 2.12 of <em>Accurate Throughput Prediction of Basic Blocks on Recent Intel Microarchitectures</em>[^1] explains how port are assigned, though it fails to explain example 4 in the question description. I also failed to figure out what role Latency plays in the port assignment.</p> <blockquote> <p>Previous work [19, 25, 26] has identified the ports that the µops of individual instructions can use. For µops that can use more than one port, it was, however, previously unknown how the actual port is chosen by the processor. We reverse-engineered the port assignment algorithm using microbenchmarks. In the following, we describe our findings for CPUs with eight ports; such CPUs are currently most widely used.</p> <p>The ports are assigned when the µops are issued by the renamer to the scheduler. In a single cycle, up to four µops can be issued. In the following, we will call the position of a µop within a cycle an issue slot; e.g., the oldest instruction issued in a cycle would occupy issue slot 0.</p> <p><strong>The port that a µop is assigned depends on its issue slot and on the ports assigned to µops that have not been executed and were issued in a previous cycle.</strong></p> <p>In the following, we will only consider µops that can use more than one port. For a given µop m, let $P_{min}$ be the port to which the fewest non-executed µops have been assigned to from among the ports that m can use. Let $P_{min'}$ be the port with the second smallest usage so far. If there is a tie among the ports with the smallest (or second smallest, respectively) usage, let $P_{min}$ (or $P_{min'}$) be the port with the highest port number from among these ports (the reason for this choice is probably that ports with higher numbers are connected to fewer functional units). If the difference between $P_{min}$ and $P_{min'}$ is greater or equal to 3, we set $P_{min'}$ to $P_{min}$.</p> <p>The µops in issue slots 0 and 2 are assigned to port $P_{min}$ The µops in issue slots 1 and 3 are assigned to port $P_{min'}$.</p> <p>A special case is µops that can use port 2 and port 3. These ports are used by µops that handle memory accesses, and both ports are connected to the same types of functional units. For such µops, the port assignment algorithm alternates between port 2 and port 3.</p> </blockquote> <p>I tried to find out whether $P_{min}$ and $P_{min'}$ are shared between threads (Hyper-Threading), namely <strong>whether one thread can affect the port assignment of another one in the same core.</strong></p> <p>Just split the code used in BeeOnRope's answer into two threads.</p> <pre><code>thread1: .loop: imul rax, rbx, 5 jmp .loop thread2: mov esi,1000000000 .top: bswap eax dec esi jnz .top jmp thread2 </code></pre> <p>Where instructions <code>bswap</code> can be executed on ports 1 and 5, and <code>imul r64, R64, i</code> on port 1. If counters were shared between threads, you would see <code>bswap</code> executed on port 5 and <code>imul</code> executed on port 1.</p> <p>The experiment was recorded as follows, where ports P0 and P5 on thread 1 and p0 on thread 2 should have recorded a small amount of non-user data, but without hindering the conclusion. 
It can be seen from the data that the <code>bswap</code> instruction of thread 2 is executed alternately between ports P1 and P5 without giving up P1.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>port</th> <th>thread 1 active cycles</th> <th>thread 2 active cycles</th> </tr> </thead> <tbody> <tr> <td>P0</td> <td>63,088,967</td> <td>68,022,708</td> </tr> <tr> <td>P1</td> <td>180,219,013,832</td> <td>95,742,764,738</td> </tr> <tr> <td>P5</td> <td>63,994,200</td> <td>96,291,124,547</td> </tr> <tr> <td>P6</td> <td>180,330,835,515</td> <td>192,048,880,421</td> </tr> <tr> <td>total</td> <td>180,998,504,099</td> <td>192,774,759,297</td> </tr> </tbody> </table> </div> <p>Therefore, the counters are not shared between threads.</p> <p>This conclusion does not conflict with SMotherSpectre[^2], which uses time as the side channel. (For example, thread 2 waits longer on port 1 to use port 1.)</p> <blockquote> <p>Executing instructions that occupy a specific port and measuring their timing enables inference about other instructions executing on the same port. We first choose two instructions, each scheduled on a single, distinct, execution port. One thread runs and times a long sequence of single µop instructions scheduled on port a, while simultaneously the other thread runs a long sequence of instructions scheduled on port b. We expect that, if a = b, contention occurs and the measured execution time is longer compared to the a ≠ b case.</p> </blockquote> <hr /> <p>[^1]: Abel, Andreas, and Jan Reineke. &quot;Accurate Throughput Prediction of Basic Blocks on Recent Intel Microarchitectures.&quot; arXiv preprint arXiv:2107.14210 (2021).</p> <p>[^2]: Bhattacharyya, Atri, Alexandra Sandulescu, Matthias Neugschwandtner, Alessandro Sorniotti, Babak Falsafi, Mathias Payer, and Anil Kurmus. “SMoTherSpectre: Exploiting Speculative Execution through Port Contention.” Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, November 6, 2019, 785–800. <a href="https://doi.org/10.1145/3319535.3363194" rel="nofollow noreferrer">https://doi.org/10.1145/3319535.3363194</a>.</p>
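<p>For readers who want to play with the quoted heuristic, here is a toy Python sketch of the port-assignment rule as described in section 2.12 of the paper. The per-port backlog numbers and candidate port sets below are made up for illustration; the real hardware state is of course not observable like this.</p>
<pre><code>def pick_port(candidate_ports, pending, issue_slot):
    """candidate_ports: ports the uop can use; pending: non-executed uops per port."""
    # Rank by fewest pending uops; ties go to the higher-numbered port.
    ranked = sorted(candidate_ports, key=lambda p: (pending[p], -p))
    p_min = ranked[0]
    p_min2 = ranked[1] if len(ranked) > 1 else p_min
    if abs(p_min2 - p_min) >= 3:      # the "difference >= 3" special case
        p_min2 = p_min
    # Issue slots 0 and 2 get P_min, slots 1 and 3 get P_min'.
    return p_min if issue_slot in (0, 2) else p_min2

pending = {0: 3, 1: 5, 5: 2, 6: 0}                       # hypothetical backlog
print(pick_port([0, 1, 5, 6], pending, issue_slot=0))    # -> 6 (P_min)
print(pick_port([0, 1, 5, 6], pending, issue_slot=1))    # -> 5 (P_min')
</code></pre>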
2021-12-18 02:52:33.067000+00:00
2021-12-18 03:12:08.223000+00:00
2021-12-18 03:12:08.223000+00:00
null
40,681,331
<p>Modern x86 CPUs break down the incoming instruction stream into micro-operations (uops<sup>1</sup>) and then schedule these uops <a href="https://en.wikipedia.org/wiki/Out-of-order_execution" rel="noreferrer">out-of-order</a> as their inputs become ready. While the basic idea is clear, I'd like to know the specific details of <em>how</em> ready instructions are scheduled, since it impacts micro-optimization decisions.</p> <p>For example, take the following toy loop<sup>2</sup>:</p> <pre><code>top: lea eax, [ecx + 5] popcnt eax, eax add edi, eax dec ecx jnz top </code></pre> <p>this basically implements the loop (with the following correspondence: <code>eax -&gt; total, c -&gt; ecx</code>):</p> <pre><code>do { total += popcnt(c + 5); } while (--c &gt; 0); </code></pre> <p>I'm familiar with the process of optimizing any small loop by looking at the uop breakdown, dependency chain latencies and so on. In the loop above we have only one carried dependency chain: <code>dec ecx</code>. The first three instructions of the loop (<code>lea</code>, <code>popcnt</code>, <code>add</code>) are part of a dependency chain that starts fresh each loop.</p> <p>The final <code>dec</code> and <code>jne</code> are fused. So we have a total of 4 fused-domain uops, and one only loop-carried dependency chain with a latency of 1 cycle. So based on that criteria, it seems that the loop can execute at 1 cycle/iteration.</p> <p>However, we should look at the port pressure too:</p> <ul> <li>The <code>lea</code> can execute on ports 1 and 5</li> <li>The popcnt can execute on port 1</li> <li>The <code>add</code> can execute on port 0, 1, 5 and 6</li> <li>The predicted-taken <code>jnz</code> executes on port 6</li> </ul> <p>So to get to 1 cycle / iteration, you pretty much need the following to happen:</p> <ul> <li>The popcnt <em>must</em> execute on port 1 (the only port it can execute on)</li> <li>The <code>lea</code> <em>must</em> execute on port 5 (and never on port 1)</li> <li>The <code>add</code> <em>must</em> execute on port 0, and never on any of other three ports it can execute on</li> <li>The <code>jnz</code> can only execute on port 6 anyway</li> </ul> <p>That's a lot of conditions! If instructions just got scheduled randomly, you could get a much worse throughput. For example, 75% the <code>add</code> would go to port 1, 5 or 6, which would delay the <code>popcnt</code>, <code>lea</code> or <code>jnz</code> by one cycle. 
Similarly for the <code>lea</code> which can go to 2 ports, one shared with <code>popcnt</code>.</p> <p>IACA on the other hand reports a result very close to optimal, 1.05 cycles per iteration:</p> <pre><code>Intel(R) Architecture Code Analyzer Version - 2.1 Analyzed File - l.o Binary Format - 64Bit Architecture - HSW Analysis Type - Throughput Throughput Analysis Report -------------------------- Block Throughput: 1.05 Cycles Throughput Bottleneck: FrontEnd, Port0, Port1, Port5 Port Binding In Cycles Per Iteration: --------------------------------------------------------------------------------------- | Port | 0 - DV | 1 | 2 - D | 3 - D | 4 | 5 | 6 | 7 | --------------------------------------------------------------------------------------- | Cycles | 1.0 0.0 | 1.0 | 0.0 0.0 | 0.0 0.0 | 0.0 | 1.0 | 0.9 | 0.0 | --------------------------------------------------------------------------------------- N - port number or number of cycles resource conflict caused delay, DV - Divider pipe (on port 0) D - Data fetch pipe (on ports 2 and 3), CP - on a critical path F - Macro Fusion with the previous instruction occurred * - instruction micro-ops not bound to a port ^ - Micro Fusion happened # - ESP Tracking sync uop was issued @ - SSE instruction followed an AVX256 instruction, dozens of cycles penalty is expected ! - instruction not supported, was not accounted in Analysis | Num Of | Ports pressure in cycles | | | Uops | 0 - DV | 1 | 2 - D | 3 - D | 4 | 5 | 6 | 7 | | --------------------------------------------------------------------------------- | 1 | | | | | | 1.0 | | | CP | lea eax, ptr [ecx+0x5] | 1 | | 1.0 | | | | | | | CP | popcnt eax, eax | 1 | 0.1 | | | | | 0.1 | 0.9 | | CP | add edi, eax | 1 | 0.9 | | | | | | 0.1 | | CP | dec ecx | 0F | | | | | | | | | | jnz 0xfffffffffffffff4 </code></pre> <p>It pretty much reflects the necessary &quot;ideal&quot; scheduling I mentioned above, with a small deviation: it shows the <code>add</code> stealing port 5 from the <code>lea</code> on 1 out of 10 cycles. It also doesn't know that the fused branch is going to go to port 6 since it is predicted taken, so it puts most of the uops for the branch on port 0, and most of the uops for the <code>add</code> on port 6, rather than the other way around.</p> <p>It's not clear if the extra 0.05 cycles that IACA reports over the optimal is the result of some deep, accurate analysis, or a less insightful consequence of the algorithm it uses, e.g., analyzing the loop over a fixed number of cycles, or just a bug or whatever. The same goes for the 0.1 fraction of a uop that it thinks will go to the non-ideal port. It is also not clear if one explains the other - I would think that mis-assigning a port 1 out of 10 times would cause a cycle count of 11/10 = 1.1 cycles per iteration, but I haven't worked out the actual downstream results - maybe the impact is less on average. Or it could just be rounding (0.05 == 0.1 to 1 decimal place).</p> <p>So how do modern x86 CPUs actually schedule? In particular:</p> <ol> <li>When multiple uops are <em>ready</em> in the reservation station, in what order are they scheduled to ports?</li> <li>When a uop can go to multiple ports (like the <code>add</code> and <code>lea</code> in the example above), how is it decided which port is chosen?</li> <li>If any of the answers involve a concept like <em>oldest</em> to choose among uops, how is it defined? Age since it was delivered to the RS? Age since it became ready? How are ties broken? 
Does program order ever come into it?</li> </ol> <h1>Results on Skylake</h1> <p>Let's measure some actual results on Skylake to check which answers explain the experimental evidence, so here are some real-world measured results (from <code>perf</code>) on my Skylake box. Confusingly, I'm going switch to using <code>imul</code> for my &quot;only executes on one port&quot; instruction, since it has many variants, including 3-argument versions that allow you to use different registers for the source(s) and destination. This is very handy when trying to construct dependency chains. It also avoids the whole &quot;incorrect dependency on destination&quot; that <code>popcnt</code> has.</p> <h2>Independent Instructions</h2> <p>Let's start by looking at the simple (?) case that the instructions are relatively independent - without any dependency chains other than trivial ones like the loop counter.</p> <p>Here's a 4 uop loop (only 3 executed uops) with mild pressure. All instructions are independent (don't share any sources or destinations). The <code>add</code> could in principle steal the <code>p1</code> needed by the <code>imul</code> or <code>p6</code> needed by the dec:</p> <h3>Example 1</h3> <pre><code>instr p0 p1 p5 p6 xor (elim) imul X add X X X X dec X top: xor r9, r9 add r8, rdx imul rax, rbx, 5 dec esi jnz top The results is that this executes with perfect scheduling at 1.00 cycles / iteration: 560,709,974 uops_dispatched_port_port_0 ( +- 0.38% ) 1,000,026,608 uops_dispatched_port_port_1 ( +- 0.00% ) 439,324,609 uops_dispatched_port_port_5 ( +- 0.49% ) 1,000,041,224 uops_dispatched_port_port_6 ( +- 0.00% ) 5,000,000,110 instructions:u # 5.00 insns per cycle ( +- 0.00% ) 1,000,281,902 cycles:u ( +- 0.00% ) </code></pre> <p>As expected, <code>p1</code> and <code>p6</code> are fully utilized by the <code>imul</code> and <code>dec/jnz</code> respectively, and then the <code>add</code> issues <em>roughly</em> half and half between the remaining available ports. Note <em>roughly</em> - the actual ratio is 56% and 44%, and this ratio is pretty stable across runs (note the <code>+- 0.49%</code> variation). If I adjust the loop alignment, the split changes (53/46 for 32B alignment, more like 57/42 for 32B+4 alignment). Now, we if change nothing except the position of <code>imul</code> in the loop:</p> <h3>Example 2</h3> <pre><code>top: imul rax, rbx, 5 xor r9, r9 add r8, rdx dec esi jnz top </code></pre> <p>Then suddenly the <code>p0</code>/<code>p5</code> split is exactly 50%/50%, with 0.00% variation:</p> <pre><code> 500,025,758 uops_dispatched_port_port_0 ( +- 0.00% ) 1,000,044,901 uops_dispatched_port_port_1 ( +- 0.00% ) 500,038,070 uops_dispatched_port_port_5 ( +- 0.00% ) 1,000,066,733 uops_dispatched_port_port_6 ( +- 0.00% ) 5,000,000,439 instructions:u # 5.00 insns per cycle ( +- 0.00% ) 1,000,439,396 cycles:u ( +- 0.01% ) </code></pre> <p>So that's already interesting, but it's hard to tell what's going on. Perhaps the exact behavior depends on the initial conditions at loop entry and is sensitive to ordering within the loop (e.g., because counters are used). This example shows that something more than &quot;random&quot; or &quot;stupid&quot; scheduling is going on. 
In particular, if you just eliminate the <code>imul</code> instruction from the loop, you get the following:</p> <h3>Example 3</h3> <pre><code> 330,214,329 uops_dispatched_port_port_0 ( +- 0.40% ) 314,012,342 uops_dispatched_port_port_1 ( +- 1.77% ) 355,817,739 uops_dispatched_port_port_5 ( +- 1.21% ) 1,000,034,653 uops_dispatched_port_port_6 ( +- 0.00% ) 4,000,000,160 instructions:u # 4.00 insns per cycle ( +- 0.00% ) 1,000,235,522 cycles:u ( +- 0.00% ) </code></pre> <p>Here, the <code>add</code> is now roughly evenly distributed among <code>p0</code>, <code>p1</code> and <code>p5</code> - so the presence of the <code>imul</code> did affect the <code>add</code> scheduling: it wasn't just a consequence of some &quot;avoid port 1&quot; rule.</p> <p>Note here that total port pressure is only 3 uops/cycle, since the <code>xor</code> is a zeroing idiom and is eliminated in the renamer. Let's try with the max pressure of 4 uops. I expect whatever mechanism kicked in above to able to perfectly schedule this also. We only change <code>xor r9, r9</code> to <code>xor r9, r10</code>, so it is no longer a zeroing idiom. We get the following results:</p> <h3>Example 4</h3> <pre><code>top: xor r9, r10 add r8, rdx imul rax, rbx, 5 dec esi jnz top 488,245,238 uops_dispatched_port_port_0 ( +- 0.50% ) 1,241,118,197 uops_dispatched_port_port_1 ( +- 0.03% ) 1,027,345,180 uops_dispatched_port_port_5 ( +- 0.28% ) 1,243,743,312 uops_dispatched_port_port_6 ( +- 0.04% ) 5,000,000,711 instructions:u # 2.66 insns per cycle ( +- 0.00% ) 1,880,606,080 cycles:u ( +- 0.08% ) </code></pre> <p>Oops! Rather than evenly scheduling everything across <code>p0156</code>, the scheduler has underused <code>p0</code> (it's only executing something ~49% of cycles), and hence <code>p1</code> and <code>p6</code> are oversubcribed because they are executing both their <em>required</em> ops of <code>imul</code> and <code>dec/jnz</code>. This behavior, I think is consistent with a <em>counter-based</em> pressure indicator as hayesti indicated in their answer, and with <strong>uops being assigned to a port at issue-time, not at execution time</strong> as both hayesti and Peter Cordes mentioned. That behavior<sup>3</sup> makes the <em>execute the oldest ready uops</em> rule not nearly as effective. If uops weren't bound to execution ports at issue, but rather at execution, then this &quot;oldest&quot; rule would fix the problem above after one iteration - once one <code>imul</code> and one <code>dec/jnz</code> got held back for a single iteration, they will always be older than the competing <code>xor</code> and <code>add</code> instructions, so should always get scheduled first. One thing I am learning though, is that if ports are assigned at issue time, this rule doesn't help because the ports are pre-determined at issue time. I guess it still helps a bit in favoring instructions which are part of long dependecy chains (since these will tend to fall behind), but it's not the cure-all I thought it was.</p> <p>That also seems to be a explain the results above: <code>p0</code> gets assigned more pressure than it really has because the <code>dec/jnz</code> combo can <em>in theory</em> execute on <code>p06</code>. 
<em>In fact</em> because the branch is predicted taken it only ever goes to <code>p6</code>, but perhaps that info can't feed into the pressure balancing algorithm, so the counters tend to see equal pressure on <code>p016</code>, meaning that the <code>add</code> and the <code>xor</code> get spread around differently than optimal.</p> <p>Probably we can test this, by unrolling the loop a bit so the <code>jnz</code> is less of a factor...</p> <hr /> <p><sup>1</sup> OK, it is properly written <em>μops</em>, but that kills search-ability and to actually type the &quot;μ&quot; character I'm usually resorting to copy-pasting the character from a webpage.</p> <p><sup>2</sup> I had originally used <code>imul</code> instead of <code>popcnt</code> in the loop, but, unbelievably, _IACA doesn't <a href="https://software.intel.com/en-us/forums/intel-architecture-code-analyzer/topic/296340#comment-1603915" rel="noreferrer">support it_</a>!</p> <p><sup>3</sup> Please note that I'm not suggesting this is a poor design or anything - there are probably very good hardware reasons why the scheduler cannot easily make all its decisions at execution time.</p>
2016-11-18 15:58:25.567000+00:00
2021-12-18 03:12:08.223000+00:00
2021-03-15 21:14:13.830000+00:00
performance|optimization|x86|intel|cpu-architecture
['https://doi.org/10.1145/3319535.3363194']
1
53,963,546
<p>Consider looking into normalized cross-correlation. Note that this works well only if the size and aspect ratio of the objects don't change between the two images. Here is a link to the MATLAB function; its example does something similar to what you want: <a href="https://www.mathworks.com/help/images/ref/normxcorr2.html" rel="nofollow noreferrer">MATLAB NCC</a></p> <p>If the sizes and aspect ratios do change, that is when more sophisticated image processing / machine learning algorithms are needed. If you can get a large number of training examples, you could use deep learning for this task. My personal choice would be to turn this into an image segmentation problem and then use a <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">U-Net</a>.</p>
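<p>Although the question is about MATLAB, here is a plain NumPy sketch of what normalized cross-correlation computes (a slow reference loop for illustration, not a drop-in replacement for <code>normxcorr2</code>):</p>
<pre><code>import numpy as np

def ncc(image, template):
    """Normalized cross-correlation scores for every valid template position."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    H, W = image.shape
    out = np.zeros((H - th + 1, W - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            out[i, j] = (p * t).sum() / denom if denom > 0 else 0.0
    return out  # the peak of this map marks the best match of the template
</code></pre>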
2018-12-28 19:40:08.007000+00:00
2018-12-28 19:40:08.007000+00:00
null
null
53,962,004
<p>I'm new to MATLAB and working on a high-school project. The project involves two pictures containing objects. The first picture contains several objects; the second picture contains only one of the objects from the first picture. The two images are compared, and if the object in the second image matches one of the objects in the first picture, only that object is shown in color in the first picture, while the other objects are displayed in gray.</p> <p>I've done some research, but I still don't know how to proceed. What approach should I follow? Is there an example like this? How can I investigate and learn about this?</p> <p><a href="https://i.stack.imgur.com/AJAj6.jpg" rel="nofollow noreferrer">first image</a> <a href="https://i.stack.imgur.com/vZhmP.jpg" rel="nofollow noreferrer">second image</a> Pictures are attached.</p>
2018-12-28 17:11:23.397000+00:00
2018-12-28 19:40:08.007000+00:00
null
image|matlab|object
['https://www.mathworks.com/help/images/ref/normxcorr2.html', 'https://arxiv.org/abs/1505.04597']
2
59,354,744
<blockquote> <ol> <li>In the paper by Blundell (2015), the coefficient is set to 1/M (where M is the number of mini-batches). In the example given by TFP, the coefficient is given as 1/mnist_data.train.num_examples. Why?</li> </ol> </blockquote> <p>In the BBB paper eq. 8, they refer to M being the number of mini-batches. To be consistent with non-stochastic gradient learning, it should be scaled by the number of mini-batches, which is what is done by <a href="https://papers.nips.cc/paper/4329-practical-variational-inference-for-neural-networks.pdf" rel="nofollow noreferrer">Graves</a>. Another alternative is the one in eq. 9, where they scale it by <code>\pi_i</code>, with all the values in the set <code>{\pi}</code> summing to one.</p> <p>In the TFP example, it does look like <code>num_examples</code> is the total number of independent samples within the training set, which is much larger than the number of batches. This goes by a few names, such as <a href="https://arxiv.org/pdf/1910.09227v1.pdf" rel="nofollow noreferrer">Safe Bayes</a> or <a href="https://arxiv.org/pdf/2002.08791.pdf" rel="nofollow noreferrer">Tempering</a>. Have a look at sec. 8 of <a href="https://arxiv.org/pdf/2002.08791.pdf" rel="nofollow noreferrer">this paper</a> for some more discussion of the use of tempering within Bayesian inference and its suitability.</p> <blockquote> <p>As I go from 2d input to 3d images volumes, the KL loss is still significantly larger (~1k) than the cross-entropy (~1), even after dividing by mnist_data.train.num_examples. Why?</p> </blockquote> <p>The ELBO will always be larger than just your cross-entropy (which defines your likelihood). Have a look at how the KL divergence term in the ELBO is derived (under the full mean-field approach, where each weight/parameter is assumed to be independent).</p> <p>Since the assumed posterior is factorised (each parameter is assumed independent), we can write the joint distribution as a product. This means that when you take the log while computing the KL between the approximate posterior and the prior, you can write it as a sum of per-parameter KL terms. Since the KL is >= 0, every parameter you add to your model adds another positive term to your ELBO. This is likely why your loss is so much larger for your 3D model: it has more parameters.</p> <p>Another reason this could occur is if you have less data (your M is smaller, so the KL term is downweighted less and dominates the loss more).</p> <blockquote> <p>What is the guideline for getting a proper value for this coefficient? Maybe like the two-loss terms should be the same order of magnitude?</p> </blockquote> <p>I am unsure of any specific guideline; for training you are interested primarily in the gradients. A large loss does not mean a large gradient. Have a look at the gradients contributed by the negative log likelihood and the KL term in your ELBO.
If the KL term is too large, you probably need a more informative prior or more data (you could simply scale the KL term but this feels a bit yucky for the Bayesian in me).</p> <blockquote> <p>The current coefficient only takes care of the number of training samples, but not the network complexity or the number of parameters in the network, which I assume the KL loss increase with the complexity of the model.</p> </blockquote> <p>Yes, as stated before, in general, more parameters == greater ELBO (for a mean-field approach as used in Bayes by Backprop).</p> <blockquote> <p>I am trying to implement a neural network with the KL loss, without using keras.model.losses, as some software production and hardware support limitation. I am trying to train my model with TF 1.10 and TFP 0.3.0., the issue is that for tf&lt;=1.14, tf.keras.model does not support tf.layers inside the Keras model, so I can't use my original model straight away. Is there a way to get the KL loss, not from model.losses, but from layers or weights of the network in a TF construct?</p> </blockquote> <p>I am unsure about the best way to tackle this part of it. I would be cautious about going to older versions where it isn't explicitly supported. They put those warnings/exceptions in for a reason.</p> <blockquote> <p>Is batch normalization or group normalization still helpful in Bayesian deep learning?</p> </blockquote> <p>For variational inference (as done in Bayes by Backprop) Batchnorm is fine. For sampling methods such as MCMC, Batch normalization is no longer suitable. Have a look at <a href="https://arxiv.org/pdf/1908.03491v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1908.03491v1.pdf</a> for info on suitability for batch norm with sampling methods for approx. Bayesian inference.</p>
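<p>A small numeric sketch of how the two weighting conventions above differ in scale; all the numbers are illustrative placeholders, roughly matching the magnitudes mentioned in the question:</p>
<pre><code>num_examples = 60000                       # total training samples (the 1/N convention in the TFP example)
batch_size = 128
num_batches = num_examples // batch_size   # M (the 1/M convention in Blundell/Graves)

nll = 1.0                                  # per-example cross-entropy, ~1 as observed
kl_total = 1000.0                          # KL summed over all weights; grows with parameter count

loss_over_batches = nll + kl_total / num_batches     # 1/M weighting
loss_over_examples = nll + kl_total / num_examples   # 1/N weighting
print(loss_over_batches, loss_over_examples)         # roughly 3.1 vs 1.02
</code></pre>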
2019-12-16 10:29:23.617000+00:00
2020-05-12 05:35:47.640000+00:00
2020-05-12 05:35:47.640000+00:00
null
58,848,341
<p>I have been trying to conduct a few experiments using TensorFlow Probability (TFP), and I got a few questions.</p> <ol> <li><p>What is the proper value of the coefficient of the KL loss?</p> <ol> <li><p>In the paper by Blundell (2015), the coefficient is set to <code>1/M</code> (where <code>M</code> is the number of mini-batches). In the example given by TFP, the coefficient is given as <code>1/mnist_data.train.num_examples</code>. Why?</p></li> <li><p>As I go from 2d input to 3d images volumes, the KL loss is still significantly larger (~1k) than the cross-entropy (~1), even after dividing by <code>mnist_data.train.num_examples</code>. Why?</p></li> <li><p>What is the guideline for getting a proper value for this coefficient? Maybe like the two-loss terms should be the same order of magnitude?</p></li> <li><p>The current coefficient only takes care of the number of training samples, but not the network complexity or number of parameters in the network, which I assume the KL loss increase with the complexity of the model.</p></li> </ol></li> <li><p>I am trying to implement a neural network with the KL loss, without using <code>keras.model.losses</code>, as some software production and hardware support limitation. I am trying to train my model with TF 1.10 and TFP 0.3.0., the issue is that for <code>tf&lt;=1.14</code>, <code>tf.keras.model</code> does not support <code>tf.layers</code> inside the Keras model, so I can't use my original model straight away. Is there a way to get the KL loss, not from <code>model.losses</code>, but from layers or weights of the network in a TF construct?</p></li> <li><p>Is batch normalization or group normalization still helpful in Bayesian deep learning?</p></li> </ol>
2019-11-14 02:07:34.373000+00:00
2020-05-12 05:35:47.640000+00:00
2020-01-27 01:24:05.750000+00:00
tensorflow|bayesian|tensorflow-probability
['https://papers.nips.cc/paper/4329-practical-variational-inference-for-neural-networks.pdf', 'https://arxiv.org/pdf/1910.09227v1.pdf', 'https://arxiv.org/pdf/2002.08791.pdf', 'https://arxiv.org/pdf/2002.08791.pdf', 'https://arxiv.org/pdf/1908.03491v1.pdf']
5
61,151,950
<p>This network structure replaces the fully connected layer with global average pooling. The classic <a href="https://arxiv.org/abs/1312.4400" rel="nofollow noreferrer">Network in Network</a> architecture uses the same idea.</p>
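<p>A short PyTorch sketch of why no fully connected or softmax layer is needed here (the tensors are random placeholders): the last convolution already produces 10 channels, global average pooling collapses each channel to one number per image, and the softmax is folded into <code>CrossEntropyLoss</code> during training.</p>
<pre><code>import torch
import torch.nn as nn

feature_maps = torch.randn(8, 10, 4, 4)          # (batch, channels == classes, H, W)
pooled = nn.AdaptiveAvgPool2d(1)(feature_maps)   # shape (8, 10, 1, 1)
logits = pooled.view(pooled.size(0), -1)         # shape (8, 10): one logit per class

targets = torch.randint(0, 10, (8,))             # placeholder labels
loss = nn.CrossEntropyLoss()(logits, targets)    # applies log-softmax internally
</code></pre>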
2020-04-11 03:56:15.510000+00:00
2020-04-11 03:56:15.510000+00:00
null
null
61,150,929
<p>The example from <a href="https://pytorch.org/tutorials/beginner/nn_tutorial.html" rel="nofollow noreferrer">PyTorch's official tutorial</a> has the following ConvNet. My understanding is that the output layer uses a softmax to estimate the digit an image corresponds to. Why doesn't the code have a softmax layer or fully connected layer?</p> <pre><code>model = nn.Sequential( nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1), Lambda(lambda x: x.view(x.size(0), -1)), ) opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9) </code></pre>
2020-04-11 01:16:34.430000+00:00
2020-04-11 03:56:15.510000+00:00
null
pytorch
['https://arxiv.org/abs/1312.4400']
1