question_id | site | title | body | link | tags | votes | creation_date | comments | comment_count | category | diamond
string | string | string | string | string | list | int64 | timestamp[s] | list | int64 | string | int64
---|---|---|---|---|---|---|---|---|---|---|---
301
|
math
|
Proving the number circle internet meme, where sum of each adjacent numbers is a perfect square
|
I came across this internet meme, and one of my friends mentioned that he thinks such a number circle would always exist for any big enough integer. We tried to prove the hypothesis, but could not find a proof.
We defined the circle to be a Hamiltonian cycle of a graph $G_n$ where $V(G_n) = \{1, ..., n\}$ and $$E(G_n) = \{(v_1, v_2)\mid v_1, v_2 \in V,~ v_1 + v_2 = i^2 \text{ for some } i \in \Bbb Z \}$$
My attempt was to show that there exists a Hamiltonian path in $G_n$ that starts and ends at vertices that are neighbors of $n+1$, but without success.
PS: I built a simple program that searches by brute force for a Hamiltonian cycle of such a graph and got these results:
31 : false
32 : 1-8-28-21-4-32-17-19-30-6-3-13-12-24-25-11-5-31-18-7-29-20-16-9-27-22-14-2-23-26-10-15
33 : 1-8-28-21-4-32-17-19-30-6-3-13-12-24-25-11-5-20-29-7-18-31-33-16-9-27-22-14-2-23-26-10-15
34 : 1-3-13-12-4-32-17-8-28-21-15-34-30-19-6-10-26-23-2-14-22-27-9-16-33-31-18-7-29-20-5-11-25-24
35 : 1-3-6-19-30-34-2-7-18-31-33-16-9-27-22-14-11-25-24-12-13-23-26-10-15-21-28-8-17-32-4-5-20-29-35
36 : 1-3-6-19-30-34-2-23-26-10-15-21-4-32-17-8-28-36-13-12-24-25-11-5-20-29-7-18-31-33-16-9-27-22-14-35
37 : 1-3-22-14-35-29-20-5-11-25-24-12-37-27-9-16-33-31-18-7-2-34-15-21-4-32-17-19-30-6-10-26-23-13-36-28-8
...
58 : 1-3-6-10-15-21-4-5-11-25-39-42-58-23-13-36-28-53-47-2-34-30-51-49-32-17-19-45-55-26-38-43-57-24-40-41-8-56-44-37-12-52-48-33-31-50-14-22-27-54-46-18-7-9-16-20-29-35
I don't know if this is any clue, but nearly all path segments are reused from each $n$ to the next, either partially or in reversed order.
PS2: If anyone is interested in generating their own series, to reduce the burden, here is JavaScript source code that searches for a Hamiltonian cycle of the graph by brute force.
var size = 32;

// Largest i such that i^2 can be the sum of two distinct numbers <= size (at most 2*size - 1).
var maxSqrt = () => Math.sqrt(size * 2 - 1) | 0;

function isSquare(x) {
  return x > 0 && Math.sqrt(x) % 1 === 0;
}

// Depth-first search for a Hamiltonian path starting at vertex n (default 1).
// `step` counts the edges used so far; `visited` marks the vertices already on the path.
// Returns the path as a string "v1-v2-..." if it closes into a cycle back to 1, else false.
function foo(n = 1, step = 0, visited = new Array(size).fill(false)) {
  if (step == size - 1) {
    // Every vertex is on the path; it closes into a cycle iff the last vertex plus 1 is a square.
    if (isSquare(n + 1)) {
      return n;
    } else {
      return false;
    }
  }
  visited[n - 1] = true;
  // find edges: admissible squares n + target = i^2 start at ceil(sqrt(n)) (at least 2)
  var minSqrt = Math.ceil(Math.sqrt(n));
  minSqrt = minSqrt === 1 ? 2 : minSqrt;
  // recursively follow each admissible edge
  for (var i = minSqrt; i <= maxSqrt(); i++) {
    var target = i * i - n;
    if (n != target && !visited[target - 1] && target <= size && target > 0) {
      var temp = foo(target, step + 1, visited.slice(0));
      if (!temp) continue;
      else {
        return n + "-" + temp;
      }
    }
  }
  return false;
}

console.log(size + " : " + foo());
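A small usage note (my addition, not part of the original routine): since foo() and maxSqrt() read the global size, the table above can be reproduced by looping over it; the brute-force search does slow down quickly for larger sizes.
for (size = 31; size <= 36; size++) {
  console.log(size + " : " + foo());
}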
PS3: The question about Hamiltonian path of n numbers seems relevant to this question
|
https://math.stackexchange.com/questions/4320839
|
[
"graph-theory",
"hamiltonian-path"
] | 26 | 2021-11-30T22:02:26 |
[
"This question is similar to: Generalisation of this circular arrangement of numbers from $1$ to $32$ with two adjacent numbers being perfect squares. If you believe it’s different, please edit the question, make it clear how it’s different and/or how the answers on that question are not helpful for your problem.",
"This seems to be a duplicate of math.stackexchange.com/questions/4289064/…",
"@PrasunBiswas nice. The number of circular solutions (as wanted in this question is in: A071984",
"I noticed that Gerbicz's method uses T(c)=25*a[1]+c,25*a[2]-c,25*a[3]+c,25*a[4]-c,...,25*a[n]+(-1)^(n+1)*c this formula and it's property to compute for segments of sequences for the hamiltonian cycle. But how can you be sure that there exists glue between each sequences?",
"@PrasunBiswasm, that seems worth posting as an answer!",
"FYI, the Mathematica command FindHamiltonianCycle[RelationGraph[(Sqrt[#1 + #2] == Floor[Sqrt[#1 + #2]]) && (#1 != #2) &, Range[k]]] generates a cycle for the numbers $1$ to $k$. (I suspect Gerbicz' dedicated algorithm is more efficient.)",
"Here's an efficient algorithm by Gerbicz for finding Hamiltonian cycles for $n\\ge 32$.",
"A bit of searching informed me that this conjecture was proved in 2018 by Robert Gerbicz (there is a Hamiltonian cycle for the square-sum chain for all $n\\ge 32$) (see this Mersenneforum post). Related OEIS sequences: 1, 2 and a few others you can find linked there. There's also some Numberphile videos on YouTube on this, see here and here.",
"Sorry about the syntax. I'm new to math stackexchange.",
"See MathJax tutorial and quick reference for formatting."
] | 0 |
Science
| 0 |
302
|
math
|
When is a polynomial contained in the ideal generated by its partial derivatives?
|
Let $R = k[x_1,\dots,x_n]$ be a multivariate polynomial ring over a field $k$ of characteristic zero, and let $f\in R$.
Is there an easy-to-test necessary and sufficient condition on $f$ such that $f$ is in the ideal of $R$ generated by its partial derivatives $\partial_if$?
Geometrically, $f\in \left(\partial_1 f,\dots, \partial_n f\right)$ is the statement that the map $f:\mathbb{A}^n\rightarrow\mathbb{A}^1$ is a submersion anywhere it is nonzero. (Edit per a comment of @Evangelion045: this isn't quite true. $f\in(\partial_1f,\dots,\partial_n f)$ implies that $f$ is a submersion everywhere it's nonzero, but in the opposite direction, $f$ being a submersion everywhere it's nonzero only implies that $f$ is in the radical of the ideal generated by its partials.)
A sufficient condition is that $f$ be homogeneous of degree $d$, due to Euler's formula
$$ d\cdot f = \sum_i x_i\partial_i f$$
I was led to the question by the surprise discovery that the statement is also true if $f$ happens to be the resolvent cubic of a monic quartic polynomial $x^4 - \sigma_1x^3 +\sigma_2x^2 - \sigma_3x +\sigma_4$, which is an inhomogeneous polynomial of degree three in the four coefficients $\sigma_1,\dots,\sigma_4$ plus an auxiliary variable $\Lambda$. But the resolvent cubic is, to be sure, weighted homogeneous (with weight $i$ for $\sigma_i$ and weight $2$ for $\Lambda$). And sure enough, the proof for Euler's formula generalizes to this case. We have
$$ d\cdot f = \sum_i d_ix_i\partial_if$$
where $d_i$ is the weight of $x_i$ and $d$ is the weighted degree of $f$. Thus the existence of a set of weights making $f$ weighted homogeneous is also a sufficient criterion. (A google search revealed an analytic generalization in a book of Arnol'd, citing Saito.)
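For completeness, here is the one-line verification behind the weighted Euler formula (a standard computation, spelled out only to keep the sufficiency argument self-contained): it suffices to check it on a monomial and extend by linearity. If $f = x_1^{a_1}\cdots x_n^{a_n}$ and $d=\sum_i d_i a_i$, then
$$\sum_i d_i x_i\,\partial_i f \;=\; \sum_i d_i a_i\, x_1^{a_1}\cdots x_n^{a_n} \;=\; d\cdot f,$$
and a weighted homogeneous polynomial is by definition a sum of monomials of the same weighted degree $d$, so the formula follows.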
However, it's not quite necessary: any (nonconstant) linear function also trivially has $f\in \left(\partial_1 f,\dots,\partial_n f\right)$ since the latter is the unit ideal, while if its constant term is nonzero then it is not homogeneous with respect to any set of (positive integer) weights.
Do you know a condition that is both sufficient and necessary?
|
https://math.stackexchange.com/questions/1700528
|
[
"algebraic-geometry",
"polynomials",
"commutative-algebra"
] | 26 | 2016-03-16T10:51:14 |
[
"Thanks for answering @BenBlum-Smith.",
"Hi @Evangelion045 - I think you're right. It's true that the hypothesis about $f$ being in the ideal implies that the map it induces is a submersion everywhere it's nonzero, but the geometric hypothesis only implies $f$ is in the radical. There's no reason for the ideal to be radical, so that seems the best you can do. I guess I was being sloppy. I'll edit.",
"@BenBlum-Smith Hi. I am trying to understand why the condition on $f$ being a submersion anywhere it is nonzero, is equivalent to what you asked; but, using the Nullstellensatz, I can only arrive at $f\\in\\sqrt{(\\partial_1 f,\\ldots,\\partial_n f)}.$ Do you need further assumptions on $f$? I cannot see how to conclude that it is in the ideal, not just in the square root.",
"In particular, this example shows that it is easier to test directly if a polynomial is in the ideal generated by its partial derivatives than to test if it is the automorphic image of a weighed homogeneous polynomial.",
"The proof that $f$ is not a the automorphic image of a weighted homogeneous polynomial is quite long. One should take the inverse automorphism $\\phi^{-1}$ and divide the proof into the two cases $deg(\\phi^{-1}(x))=deg(\\phi^{-1}(y))=1$, and the alternative (one of the images has degree greater than 1).",
"The criterion is too weak, but finding counterexamples is not easy. I was playing around with some polynomials trying to find a counterexample to the Jacobian conjecture in dimension 2, and I had a (nontrivial) pair $P,Q$ with $J(P,Q)=x$. Then applying the endomorphism $x\\mapsto x$ and $y\\mapsto xy$ I obtained that the image of $P$, which is the $f$ in the other comment, satisfies the property.",
"@san - how did you come up with that $f$ and what's the argument it's not the automorphic image of any weighted homogeneous polynomial? (Aside: the weighted-homogeneous [or automorphic image thereof] criterion seemed way too weak in any case, because it requires the expression for $f$ as a linear combination of its partials to have a very particular form.)",
"For example, $f(x,y)=x^2y+x^2y^2+6x^4y^3+9x^6y^4$ is in the ideal generated by its partial derivatives, but its not the automorphic image of any weighted homogeneous polynomial.",
"Building on Eric Wofsey's comment, I don't think one can determine a condition that is both sufficient and necessary. Every automorphic image of a weighted homogeneous polynomial will satisfy the property (this includes the affine functions), but so will the hypothetical counterexamples of the unsolved Jacobian Conjecture, if this conjecture is false.",
"@san - not a linear form or a linear functional but a linear function. Affine function, if you prefer.",
"How a linear function can have a non-zero constant term?",
"Note that this condition can be expressed using only the $k$-algebra structure of $R$: $f$ is in the ideal generated by its partial derivatives iff there exists a $k$-linear derivation $d:R\\to R$ such that $d(f)=f$. In particular, this means that the condition is preserved by any automorphisms of $R$."
] | 0 |
Science
| 0 |
303
|
math
|
Does the category of algebraically closed fields of characteristic $p$ change when $p$ changes?
|
EDIT I've now posted this question on mathoverflow. It probably makes sense to post answers over there, unless someone prefers posting here.
Let $\mathrm{ACF}_p$ denote the category of algebraically closed fields of characteristic $p$, with all homomorphisms as morphisms. The question is: when is there an equivalence of categories between $\mathrm{ACF}_p$ and $\mathrm{ACF}_l$ (with the expected answer being: only when $p=l$)?
Here are some easy observations:
1. Any equivalence of categories must do the obvious thing on objects, preserving transcendence degree, because $K$ has a smaller transcendence degree than $L$ if and only if there is a morphism $K \to L$ but not $L \to K$.
2. We can distinguish the case $p=0$ from the case $p \neq 0$ because $\mathrm{Gal}(\mathbb{Q}) \not \cong \mathrm{Gal}(\mathbb{F}_p)$. But $\mathrm{Gal}(\mathbb{F}_p) \cong \hat{\mathbb{Z}}$ for any prime $p$. So we can't distinguish $\mathrm{ACF}_p$ from $\mathrm{ACF}_l$ for different primes $p \neq l$ in such a simple-minded way. So for the rest of this post, let $p,l$ be distinct primes.
3. The next guess is that maybe we can distinguish $\mathrm{ACF}_p$ from $\mathrm{ACF}_l$ by seeing that $\mathrm{Aut}(\overline{\mathbb{F}_p(t)}) \not \cong \mathrm{Aut}(\overline{\mathbb{F}_l(t)})$. To this end, note that there is a tower $\mathbb{F}_p \subset \overline{\mathbb{F}_p} \subset \overline{\mathbb{F}_p}(t) \subset \overline{\mathbb{F}_p(t)}$. The automorphism groups of these intermediate extensions are respectively $\hat{\mathbb{Z}}$, $\mathrm{PGL}_2(\overline{\mathbb{F}_p})$, and a free profinite group (the last one is according to Wikipedia).
From (3), there is at least a subquotient $\mathrm{PGL}_2(\overline{\mathbb{F}_p})$ which looks different for different primes. But I don't see how to turn this observation into a proof that $\mathrm{Aut}(\overline{\mathbb{F}_p(t)}) \not \cong \mathrm{Aut}(\overline{\mathbb{F}_l(t)})$ specifically, or that $\mathrm{ACF}_p \not \simeq \mathrm{ACF}_l$ more generally.
Note that if we change "algebraically closed fields of characteristic $p$" to "fields of characteristic $p$", then it's easy to distinguish these categories because their subcategories of finite fields look very different.
Also, there is a natural topological enrichment of $\mathrm{ACF}_p$ where one gives the homset the topology of pointwise convergence. I'd be interested to hear of a way to distinguish these topologically-enriched categories.
|
https://math.stackexchange.com/questions/1719104
|
[
"number-theory",
"category-theory",
"field-theory",
"galois-theory",
"positive-characteristic"
] | 26 | 2016-03-29T10:49:31 |
[
"@EricWofsey yeah, I guess I did gloss over that.",
"Your first observation is not as trivial as you make it sound: it uses the fact that the totally ordered class of all cardinal numbers has no automorphisms besides the identity.",
"@tcamps Yeah, right; but the main thing is that there are automorphisms",
"@egreg Both these automorphism groups are cyclic of order 2, generated by a Frobenius element, right?",
"I see. My bad, I always forget morphisms.",
"@Crostul The automorphism group of the four element field is not isomorphic to the automorphism group of the nine element field. But just looking at the four element field suffices to prove your assertion is false.",
"No, not the same morphisms (even though it's true that the categories of finite fields for all $p$ are equivalent).",
"I'm taking all field homomorphisms as morphisms. So the subcategory of finite fields includes the automorphism group of each finite field. But this... actually is not different! So I think you're right to disagree after all!",
"I don't agree. The subcategories of finite fields are all equivalent to the poset $(\\Bbb{N} , \\mbox{divides})$.",
"Note that if we change \"algebraically closed fields of characteristic $p$\" to \"fields of characteristic $p$\", then its easy to distinguish these categories because their subcategories of finite fields look very different."
] | 0 |
Science
| 0 |
304
|
math
|
Finite-Dimensional Homogeneous Contractible Spaces
|
Suppose that $X \subset \mathbb{R}^n$ is compact, homogeneous and contractible (and thus connected). Does $X$ have to be a point?
I couldn't think of a non-trivial example, and there isn't a counterexample in the plane. The homogeneous planar continua have been classified (point, circle, pseudo-arc, circle of pseudo-arcs) and the only contractible one is a point. Maybe there is some twisty sort of example in three or four dimensions, though?
Does it become true if $\dim(X) = n$ and $X$ is embeddable in $\mathbb{R}^{n+1}$?
By homogeneous I mean for any $x, y \in X$ there is a homeomorphism $f$ of $X$ with $f(x) = y$. By contractible I mean the identity map on $X$ is homotopic to a constant map. For example the circle is homogeneous but not contractible, and the closed disc is contractible but not homogeneous.
|
https://math.stackexchange.com/questions/4025983
|
[
"general-topology",
"algebraic-topology",
"examples-counterexamples",
"geometric-topology",
"continuum-theory"
] | 25 | 2021-02-14T15:24:24 |
[
"@JohnSamples yes : )",
"It's infinite-dimensional, right?",
"What about $S^{\\infty}$?",
"I also just realized that the homogeneous planar continua are known to be the point, the circle, the pseudo-arc and the circle of pseudo-arcs, so that case is actually known. Going to edit the post.",
"Fun fact: if $G$ is a compact topological group and $G$ is contractible, then $G$ is the trivial group. This rules out such objects from being counterexamples.",
"Nice observation @AdamChalumeau",
"For those of you (like me) who where looking for an example which is a manifold, this is not possible: a compact simply-connected manifold is orientable so it has $\\mathbb Z$ for top homology. Hence, by homogeneity, no neighborhood of any point is homeomorphic to a disk, thus the \"local topology\" of a possible counterexample must be pretty nasty.",
"Another easy observation: if $X$ contractible is relaxed to $\\pi_n(X)=0$ for $n\\geq 1$ then the pseudo-arc is a (connected) example",
"Ok, I added the relevant definitions and removed the superfluity of \"connected.\" Is there a way to adjust Bing's \"two drill bits\" space, perhaps? I forget the exact properties of that space.",
"@AlessandroCodenotti You are right, I didn't properly see that you wrote $\\mathbb N$; I read $N$.",
"Suppose that $X$ is a contractible CW complex. If $\\dim X = 1$, then $X$ is a tree. Serre proves in Trees that any action of a finite group will have fixed points. That any finite $2$-dimension $G$-complex (with $G$ finite) admits fixed points is the Casacuberta-Dicks conjecture. They proved it originally for $X$ acyclic and $G$ solvable. So even though this is not exactly what you asked for, the problem seems hard (maybe the fact that we can work with the whole group $\\mathsf {Homeo}(X)$ makes things easier, I don't know).",
"@AlessandroCodenotti $[0,1]^N$ is not homogeneous. No homeomorphism maps a boundary point to an interior point.",
"An easy observation: there are such continua in $\\Bbb R^\\Bbb N$, for example $[0,1]^\\Bbb N$. (Also \"connected\" is redundant in your list of properties, being implied by \"contractible\")",
"Ah, right. Thanks :)",
"@guidoar $\\mathrm{Homeo}(X)$ acts transitively on $X$. In other words for all $x,y\\in X$ there is an homeomorphism $f\\colon X\\to X$ with $f(x)=y$.",
"What's the definition of homogeneous?"
] | 0 |
Science
| 0 |
305
|
math
|
Maximizing $\sum_{i,j=1}^{n}|\operatorname{deg}\ x_{i}-\operatorname{deg}\ x_{j}|^{3}$ over all simple graphs with $n$ vertices
|
For a simple graph $G$ on $n$ vertices, let us define
$$\mathcal{I}_{n}(G)=\sum_{i,j=1}^{n}|\operatorname{deg}\ x_{i}-\operatorname{deg}\ x_{j}|^{3}$$
I am highly interested in finding $\sup \mathcal{I}_{n}$ over all graphs with $n$ vertices (or at least some tight upper bound). What I have tried myself: I noticed that $\mathcal{I}_{n}$ must be maximized by a threshold graph:
Sketch of proof: The index $\mathcal{I}_{n}$ is a convex function of the degree sequence $\operatorname{deg} x_{1},\dots,\operatorname{deg} x_{n}$. Call the set of all such graphic sequences $D$. Then we can look at $D^{*}=\operatorname{Conv} D$, the convex hull of $D$. The maximum must then be attained at some extreme point of $D^{*}$, and it can be shown that the extreme points of $D^{*}$ are exactly those corresponding to threshold graphs.
But this didn't lead me too far. I will be glad for any insight.
Simulations show that the optimum occurs for $k$ isolated vertices and a complete graph on the other $n-k$, where $k=\lfloor\frac{n+1}5\rfloor$. The same value occurs for $k$ vertices of degree $n-1$ and no other edges, so that the other $n-k$ vertices have degree $k$. Do you have any ideas on how to prove it?
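For what it's worth, here is a small JavaScript sketch (my own; it only evaluates the one-parameter family "$k$ isolated vertices plus a clique on the remaining $n-k$ vertices", not all graphs) comparing the best $k$ within this family to the conjectured $\lfloor\frac{n+1}{5}\rfloor$:
// I_n of a graph, computed from its degree sequence (sum over all ordered pairs i, j).
function indexFromDegrees(degs) {
  let total = 0;
  for (const a of degs) for (const b of degs) total += Math.abs(a - b) ** 3;
  return total;
}

// Degree sequence of k isolated vertices together with a complete graph on n - k vertices.
function cliquePlusIsolated(n, k) {
  return Array.from({ length: n }, (_, i) => (i < k ? 0 : n - k - 1));
}

for (let n = 5; n <= 20; n++) {
  let bestK = 0, bestValue = -1;
  for (let k = 0; k <= n; k++) {
    const value = indexFromDegrees(cliquePlusIsolated(n, k));
    if (value > bestValue) { bestValue = value; bestK = k; }
  }
  console.log(n, bestK, Math.floor((n + 1) / 5), bestValue);
}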
|
https://math.stackexchange.com/questions/3564278
|
[
"combinatorics",
"graph-theory",
"discrete-optimization",
"extremal-combinatorics"
] | 24 | 2020-02-29T06:41:46 |
[] | 0 |
Science
| 0 |
306
|
math
|
Is there an endofunctor of the category of sets that maps $\kappa$ to $\kappa^+$?
|
For some time now I have been slightly bothered by the following question about endofunctors of the category of sets: is there an endofunctor of $\mathbf{Set}$ which maps each infinite cardinal $\kappa$ to a set of size $\kappa^+$ (i.e., the first larger cardinality)?
It is clear that under GCH, an example of such a functor is the power set functor, as $2^\kappa = \kappa^+$. But without this axiom, I don't know of any other example. Anyway, I was interested in whether one can construct such a functor without any additional set-theoretic axioms. To rephrase the question once more: is it always true under ZFC that there exists an endofunctor of the category of sets that maps $\kappa$ to a set of size $\kappa^+$?
My master's thesis advisor suggested that such a functor might be found as a subfunctor of the power set functor, but I never managed to make it work. Am I missing something?
|
https://math.stackexchange.com/questions/2674160
|
[
"category-theory",
"set-theory"
] | 24 | 2018-03-02T12:44:24 |
[
"@EricWofsey About the extension to finite cardinals, I think there is one. The requirement for finite sets is very strong. If you want to map $n$ to $n+1$ you are forced to define it as $F(x) = x \\cup \\{*\\}$ where $*$ is a new point. Anyway, thankfully I can use the point $*$ for my extension. Let us first define $$ P_{\\geq\\kappa} X = \\{ A \\subseteq X : |A| \\geq \\kappa \\} \\cup \\{*\\} $$ this extends to maps as $ P_{\\geq\\kappa} f(A) = f(A) $ if $ |f(A)| \\geq \\kappa $ and $*$ otherwise. Now, $FX = P_{\\geq \\aleph_0}X + X$ if my cardinal arithmetic does not fail me.",
"@EricWofsey I like the construction of $\\kappa^+$, thanks for sharing. It's good to know that there is such a thing.",
"Anyways, it seems highly likely to me that the answer to your question is \"no\", but proving a negative answer would of course require some serious set theory since such a functor does exist in some models.",
"Incidentally, it is not clear to me that even assuming GCH there is a functor which sends $\\kappa$ to a set of size $\\kappa^+$ for all cardinals (including finite ones).",
"Note that there is a \"canonical\" way to construct a set of size $|X|^+$ from a set $X$: take the set of isomorphism classes of well-orderings of subsets of $X$. This construction is functorial with respect to bijections in an obvious way (which turns out to be rather trivial; every automorphism gets sent to the identity). Unfortunately this cannot be extended to be functorial with respect to all functions.",
"@Berci: But global choice is not a theorem of ZFC. Not even global choice \"from a parameter\". Also, how does this definition preserves the identity and composition (together)?",
"But you don't mention any condition for the functor, so you can just fix an element from each nonempty set (using choice, of course) and send all functions to the constant one, mapping everything to the picked element of the codomain.",
"@NoahSchweber Thanks, I am just little slow.",
"@Jakub Yes, it's clearly first-order definable.",
"Also what exactly do you mean by definable? Is it first order definable?",
"You still need to define the functor on morphisms which is definetely not easy.",
"I'm confused. Isn't the fact that the successor function $\\kappa\\mapsto\\kappa^+$ is itself definable from $\\sf ZF$ is enough to conclude there is such functor?"
] | 0 |
Science
| 0 |
307
|
math
|
Relationship between intersection and compositum of fields
|
This issue came up in a number theory lecture today. Let $K$ be a number field and let $L/K$ be an abelian (finite Galois) extension. Then there exists a primitive $m$th root of unity $\zeta_m$ such that $K(\zeta_m)\cap L=K$ and such that $m$ satisfies a number of other nice properties.
We tried to apply the following fact: if $(m,p)=1$ for every prime $p\in\mathbb Z$ ramifying in $\mathcal O_L$, then $\mathbb Q(\zeta_m)\cap L=\mathbb Q$.
The issue in moving this fact up to $K/\mathbb Q$ is that, for general fields $E_1,E_2,E_3$, $E_1(E_2\cap E_3)\neq E_1E_2\cap E_1E_3$. However in our case, we do have that $K(\zeta_m)\cap L=K$, where $m$ is chosen via primes ramifying from the base field $\mathbb Q$. Perhaps if $K/\mathbb Q$ were Galois, this might be easier, but the question is this:
Given three fields $F,K,L$ contained in some larger field $M$, under what (minimal) conditions do we have $F(K\cap L)=FK\cap FL$?
|
https://math.stackexchange.com/questions/674591
|
[
"number-theory",
"field-theory",
"galois-theory",
"algebraic-number-theory",
"class-field-theory"
] | 24 | 2014-02-12T19:13:34 |
[] | 0 |
Science
| 0 |
308
|
math
|
When does a modular form satisfy a differential equation with rational coefficients?
|
Given a modular form $f$ of weight $k$ for a congruence subgroup $\Gamma$, and a modular function $t$ for $\Gamma$ with $t(i\infty)=0$, we can form a function $F$ such that $F(t(z))=f(z)$ (at least locally), and we know that this $F$ must now satisfy a linear ordinary differential equation
$$P_{k+1}(T)F^{(k+1)} + P_{k}(T)F^{(k)} + ... + P_{0}(T)F = 0$$
where $F^{(i)}$ is the $i$-th derivative, the $P_i$ are algebraic functions of $T$, and they are rational functions of $T$ if $t$ is a Hauptmodul for $X(\Gamma)$.
My question is the following:
given a modular form $f$, what are necessary and sufficient conditions for the existence of a modular function $t$ such that the $P_i(T)$ are rational functions?
For example, the easiest sufficient condition is that $X(\Gamma)$ has genus 0, by letting $t$ be a Hauptmodul.
But, this is not necessary, as the next condition will show.
Another sufficient condition is that $f$ is a rational weight 2 eigenform. I can show this using Shimura's construction* of an associated elliptic curve, and a computation of a logarithm for the formal group in some coordinates (*any choice in the isogeny class will work).
Trying to generalise, I have thought of the following: if $f$ is associated to a motive $h^i(V)$ of a variety $V$, with Artin-Mazur formal group $\Phi^i(V)$ of dimension 1, then we can construct a formal group law à la Stienstra, and get a logarithm using the coefficients of powers of certain polynomials. Since the dimension is 1, there will actually be a single polynomial that we take powers of, making the coefficients satisfy a rather simple recurrence relation and forcing our $P_i$ to be rational.
Now, some people, without naming names, believe that rational eigenforms should correspond to the middle cohomology of certain rational Calabi-Yau varieties. I'm not entirely certain that such people exist. Probably.
If this is true, then this should answer my question for rational eigenforms.
Putting non-eigenforms aside, since I'm not as interested in them, we are left with non-rational eigenforms. We can try to perform the same Stienstra construction, but this time we get that the Galois orbit of $f$ is associated to a "formal group law" of a motive with dimension greater than one. This will make for an interesting recurrence for the vector of the Galois orbit, but not necessarily for each form individually, as the isomorphism of formal group laws (between Stienstra's and those with the modular forms as logarithms) will scramble them together.
I realise this last paragraph might be difficult to understand, for the wording is clumsy, and the mathematical notions are even worse. If you're really interested in this, I'd be happy to elaborate.
|
https://math.stackexchange.com/questions/338453
|
[
"complex-analysis",
"number-theory",
"algebraic-geometry",
"modular-forms"
] | 24 | 2013-03-22T19:43:17 |
[] | 0 |
Science
| 0 |
309
|
math
|
Analyzing a class of vertex-deletion games
|
As part of the discussion on this question (Permutation Game Redux), a simple vertex-deletion game was proposed. The game is very simple.
Disconnect. Players alternately remove vertices from a graph $G$. The player that produces a fully disconnected graph (i.e., a graph with no edges) is the winner.
Because the game is impartial, the Sprague-Grundy theory applies: each game is equivalent to a nim-heap of some size (its nim-value), which can be calculated as the mex (minimum excluded nim-value) of its options. These nim-values can then be used to compute the nim-values of disjunctive sums of games in the usual way.
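For small graphs this mex computation is easy to mechanize; here is a rough JavaScript sketch (my own illustration; edgeless positions are treated as terminal and hence have nim-value $0$) that can be used to experiment with the observations below:
// Nim-value of Disconnect on the subgraph induced by the set `alive`,
// where `edges` is a list of pairs [u, v] on vertices 0 .. n-1.
function grundy(edges, alive, memo = new Map()) {
  const key = [...alive].sort((a, b) => a - b).join(",");
  if (memo.has(key)) return memo.get(key);
  const hasEdge = edges.some(([u, v]) => alive.has(u) && alive.has(v));
  // An edgeless position is terminal (the player who produced it has already won): value 0.
  let value = 0;
  if (hasEdge) {
    // Otherwise the options are "delete any remaining vertex"; the value is the mex of their values.
    const options = new Set();
    for (const v of alive) {
      const next = new Set(alive);
      next.delete(v);
      options.add(grundy(edges, next, memo));
    }
    while (options.has(value)) value++;
  }
  memo.set(key, value);
  return value;
}

// Examples: the path 0 - 1 - 2 and the triangle.
console.log(grundy([[0, 1], [1, 2]], new Set([0, 1, 2])));          // 2 (first-player win)
console.log(grundy([[0, 1], [1, 2], [0, 2]], new Set([0, 1, 2])));  // 0 (second-player win)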
One would like to apply this theory within a single game, e.g., to break a graph into its connected components, calculate their nim-values, and then combine them to find the value of the overall graph. Unfortunately, this doesn't work. The problem is that the win condition is not standard: the game ends before all moves are exhausted (or, equivalently, the allowed moves in one component of the graph depend on the other components).
It is not hard to see that for any graph $G$ and any even $n$, the game $G \cup \bar{K}_n$ is equivalent to $G$ (where $\bar{K}_n$ is the edgeless graph on $n$ vertices). To prove it, we need to show that the disjunctive sum $G + G\cup\bar{K}_n$ is a second-player win. The proof is by induction on $|G|+n$. If $G$ is edgeless, then the first player loses immediately (both games are over). Otherwise, the first player can move in either copy of $G$, and the second player can copy his move in the other one (reducing to $G' + G'\cup \bar{K}_n$ with $|G'|=|G|-1$); or, if $n\ge 2$, the first player can move in the disconnected piece, and the second player can do the same (reducing to $G + G\cup\bar{K}_{n-2}$).
This shows that any graph $G$ is equivalent to $H \cup K_p$, where $H$ is the part of $G$ with no disconnected vertices, and $p=0$ or $1$ is the parity of the number of disconnected vertices in $G$. All games in an equivalence class have the same nim-value, and moreover, the equivalence relation respects the union operation: if $G \sim H \cup K_p$ and $G' \sim H' \cup K_{p'}$ then $G \cup G' \sim (H \cup H')\cup K_{p\oplus p'}$. Moreover, one can see that the games in $[H \cup K_0]$ and $[H \cup K_1]$ have different nim-values unless $H$ is the null graph: when playing $H + H \cup K_1$, the first player can take the isolated vertex, leaving $H+H$, and then copy the second player's moves thereafter.
Beyond this, are there any other general decomposition or equivalence results? Any extension of the Sprague-Grundy theory to this class of games? In particular, is there some more refined equivalence relation still to be found such that all games in $[G]$ have the same nim-value, and $[G \cup H]$ can be determined in terms of $[G]$ and $[H]$?
|
https://math.stackexchange.com/questions/95895
|
[
"graph-theory",
"combinatorial-game-theory"
] | 24 | 2012-01-02T12:39:17 |
[
"I see -- interesting -- the supposed proof of equivalence fails because it's not always possible to mirror moves because, as you wrote, a move's admissibility depends on the other components, and it works in your special case because disconnected vertices are the only type of components that don't affect admissibility.",
"@joriki: I thought that initially as well, but in fact it's not the case. As a counterexample, consider the graph $K_1 \\cup G \\cup G$, where $G$ is not edgeless. Your argument suggests that this should be equivalent to $K_1$, which is a second-player win (since it is disconnected, there are no more moves allowed). But in fact the first player has a winning strategy: first delete the $K_1$, producing $G \\cup G$, and then copy the second player's moves. In other words, $K_1 = 0$ and $G \\cup G = 0$, but $K_1 \\cup (G \\cup G) \\neq K_1 + (G \\cup G)$.",
"Your insight generalizes to all connected components, not just disconnected vertices. The game is completely determined by the parities of the numbers of instances of all types of connected components; adding two isomorphic connected components leads to an equivalent game, since you can always play in one when the other player plays in the other, until they're reduced to an even number of disconnected vertices, which you've handled."
] | 0 |
Science
| 0 |
310
|
math
|
Can a free complete lattice on three generators exist in $\mathsf{NFU}$?
|
Also asked at MO.
It's a fun exercise to show in $\mathsf{ZF}$ that "the free complete lattice on $3$ generators" doesn't actually exist. The punchline, unsurprisingly, is size: a putative free complete lattice on $3$ generators would surject onto the class of ordinals.
This obstacle isn't a problem however in the context of $\mathsf{NFU}$; on the other hand, recursive constructions (which were utterly unproblematic in $\mathsf{ZF}$) now become more complicated. So the situation is unclear to me: is it consistent with $\mathsf{NFU}$ that there is a free complete lattice on $3$ generators?
I don't have much experience with $\mathsf{NFU}$, so as far as I know it's possible that in this context the phrase "free complete lattice on $3$ generators" is ambiguous - specifically, I'm a bit worried about the word "free" in this context. To make things precise, I'll use the homomorphism-based notion of freeness, that is, I'm looking for a complete lattice $L$ with three distinguished elements $a,b,c$ such that for every other complete lattice $M$ and distinguished elements $u,v,w\in M$ there is exactly one complete lattice homomorphism $L\rightarrow M$ sending $a$ to $u$, $b$ to $v$, and $c$ to $w$.
I would also be interested in the answer to the same question for other set theories admitting a universal set, such as $\mathsf{GPK}_\infty^+$. However, my current impression is that $\mathsf{NFU}$ is by far the best-understood such theory, so it seems like a good starting point.
|
https://math.stackexchange.com/questions/4261226
|
[
"logic",
"set-theory",
"lattice-orders",
"universal-algebra",
"alternative-set-theories"
] | 24 | 2021-09-26T17:21:39 |
[] | 0 |
Science
| 0 |
311
|
math
|
Maximal subgroups that force solvability.
|
For which finite groups $M$ is it the case that every finite group $G$ with $M$ as a maximal subgroup solvable?
If $M$ satisfies this condition then $M$ is solvable. Also, if $M$ is abelian then $M$ satisfies this condition. Furthermore, I believe that if $M$ is nilpotent and if all 2-subgroups of $M$ are normal subgroups of $M$ (if Sylow 2-subgroups of $M$ are abelian or quaternion, for example) then $M$ satisfies this condition (proof below).
More specific questions:
1) Is there a non-nilpotent group that satisfies this condition?
2) Which 2-groups satisfy this condition?
Apparently, the dihedral group of order 8 satisfies this condition (see Mikko Korhonen's comment on this post).
Also, if $M\times N$ satisfies this condition then $M$ and $N$ both satisfy this condition.
(This proof is adapted from j.p.'s answer to the linked question).
Let $G$ be minimal such that $G$ is not solvable and such that $G$ contains a maximal subgroup $M$ that is nilpotent and whose 2-subgroups are normal. If $M$ contains a nontrivial normal subgroup $N$ of $G$ then $G/N$ contradicts the minimality of $G$. Thus, $M$ does not contain nontrivial normal subgroups of $G$. In particular, $N_G(P)=M$ for all Sylow $p$-subgroups $P$ of $M$. Then $P$ is a Sylow $p$-subgroup of $N_G(P)$ so $P$ is a Sylow $p$-subgroup of $G$. This shows that $M$ is a Hall subgroup of $G$.
If $P$ is a Sylow $p$-subgroup of $M$ and if $Q$ is a nontrivial normal subgroup of $P$ then $N_G(Q)=M$ which has a normal $p$-complement. For $p=2$, Frobenius' normal $p$-complement theorem gives that $G$ has a normal $p$-complement. For $p\geq3$, Thompson's normal $p$-complement theorem or Glauberman's normal $p$-complement theorem gives that $G$ has a normal $p$-complement (since you only have to consider characteristic $p$-subgroups).
Thus, for each prime $p$ dividing the order of $M$, $G$ has a normal $p$-complement. Then $M$ has a normal complement $N$ in $G$. Since $M$ is solvable but $G$ is not solvable, $N$ is not solvable. In particular, $N$ does not admit a fixed-point-free automorphism of prime order. If $m\in Z(M)$ has prime order then $C_N(m)$ is nontrivial. Then $C_N(m)M$ is a subgroup of $G$ that properly contains $M$ so $C_N(m)M=G$ by the maximality of $M$. Comparing cardinalities shows that $C_N(m)=N$ so $m\in Z(G)$. Then $\langle m\rangle$ is a nontrivial normal subgroup of $G$ contained in $M$ which is a contradiction.
|
https://math.stackexchange.com/questions/2924934
|
[
"group-theory",
"finite-groups",
"solvable-groups"
] | 23 | 2018-09-20T22:48:14 |
[
"No. From the condition $M$ maximal in $G$ you get only an handle on everything between $M_G$ and $G$, so from a group-theoretic point one should always assume $M_G=1$. You cannot control direct factors: You cannot get rid of $H$ in $M\\times H$ maximal in $G\\times H$ without factoring out $M_G$. OK, you could require $M$ not to be the direct product of two non-trivial subgroups, but this cannot be used in this situation. On the contrary with the assumption $M_G=1$, $M$ being direct product (e.g. being product of its $p$-Sylow subgroups for different $p$ if it is nilpotent) is quire useful.",
"Can the condition on $M_G$ be strengthened to get a condition on $M$ that does not depend on $G$?",
"With $M_G:=\\cap_{g\\in G} M^g$, corollary 4.5 in On locally finite groups with locally nilpotent maximal subgroups by B. Bruno and S Schuur (Arch. Math. Vol 61, pp. 215-20) shows that $M$ nilpotent and $M_G$ not a $2$-group implies the solvability of $G$. Without conditions on $M_G$ one can take direct products with the simple group $PSL_2(17)$ (whose $2$-Sylow subgroups isomorphic to $D_{16}$ are maximal subgroups) with other groups for non-solvable constructions.",
"@ThomasBrowning: In \"A condition for the solvability of a finite group\" (1961) Deskins shows that if $G$ has a nilpotent maximal subgroup of nilpotency class $\\leq 2$, then $G$ is solvable. In particular, the dihedral group of order $8$ forces solvability as a maximal subgroup",
"@ThomasBrowning: For your question 1), perhaps the non-trivial semidirect product $C_3 \\rtimes C_4$ is an example.",
"Interesting. I computed maximal subgroups of a number of simple groups a while ago. It looked like all small nontrivial semi-direct products of two cyclic groups occurred with the exception of the dihedral group of order 8. I still don't know whether the dihedral group of order 8 forces solvability.",
"About 2), just to give an example: if $p = 2^n - 1$ is a Mersenne prime ($n > 3$), then $PSL(2,p)$ has the dihedral group of order $2^{n-1}$ as a maximal subgroup.",
"I wonder if there is a reduction to the case where $G$ is almost simple? That is, could we show that $M$ \"forces solvability as a maximal subgroup\" if and only if $M$ is solvable and $M$ is not a maximal subgroup of any almost simple group? If the answer is yes, then your questions are answered by a paper of Li and Zhang from 2011 (Proceedings of the LMS), who classify the solvable maximal subgroups of almost simple groups.",
"A famous example here is when $M$ is a $p$-group for an odd prime $p$."
] | 0 |
Science
| 0 |
312
|
math
|
Conjecture---Identity for Sieve of Eratosthenes collisions.
|
Let
$$\beta(n,k) = \max\{\,d \le k : d \mid n\,\}$$
(i.e., the largest divisor of $n$ that is at most $k$),
$$S(k)= \sum_{n=1}^{k!} \beta(n,k),$$
and
$$T(k)=\#\{\, i\cdot j \;:\; 1\le i\le k,\ 1\le j\le k!\,\}.$$
Does $$S(k)=T(k)?$$
See OEIS A126959.
Replace $k!$ in $S,T$ with $\exp(\psi(k))$, where $\psi(\cdot)$ is the second Chebyshev function, to get A101459.
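In case anyone wants to experiment, here is a small JavaScript sketch (my own, under the reading that $\beta(n,k)$ is the largest divisor of $n$ not exceeding $k$) that computes $S(k)$ and $T(k)$ for small $k$ directly from the definitions:
// Largest divisor of n that does not exceed k (the beta(n, k) above).
function beta(n, k) {
  for (let d = Math.min(n, k); d >= 1; d--) {
    if (n % d === 0) return d;
  }
  return 1;
}

function factorial(k) {
  let f = 1;
  for (let i = 2; i <= k; i++) f *= i;
  return f;
}

// S(k) = sum of beta(n, k) over n = 1 .. k!
function S(k) {
  const limit = factorial(k);
  let total = 0;
  for (let n = 1; n <= limit; n++) total += beta(n, k);
  return total;
}

// T(k) = number of distinct products i * j with 1 <= i <= k and 1 <= j <= k!
function T(k) {
  const limit = factorial(k);
  const products = new Set();
  for (let i = 1; i <= k; i++) {
    for (let j = 1; j <= limit; j++) products.add(i * j);
  }
  return products.size;
}

for (let k = 1; k <= 7; k++) console.log(k, S(k), T(k));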
|
https://math.stackexchange.com/questions/886041
|
[
"number-theory"
] | 23 | 2014-08-02T22:24:21 |
[
"@GerryMyerson, One sequence fails at $k=10$ and the other at $k=17$. Yikes!",
"oeis.org/A126959 has been calculated out to $n=36$, so you have verified $S(k)=T(K)$ out to $k=36$?",
"For which values of $k$ have you verified that $S(k)=T(k)$?",
"@frogeyedpeas It usually denotes the number of elements in the set.",
"What is T(n,k)?"
] | 0 |
Science
| 0 |
313
|
math
|
Difficult integral for a marginal distribution
|
I am trying to derive a marginal probability distribution for $y$, and have failed, having tried all the methods I know to solve the following integral:
$$
\operatorname{p}\left(y\right) =
\int_0^{1/\sqrt{\,2\pi\,}}\!\!\!\!
\frac{\sqrt{2/\pi}\,\,\mathrm{e}^{-y/\left(2z\right)}}{\sqrt{y\, z}\,\, \sqrt{-\log \left(2\pi\,z^2\right)}}
\,\mathrm{d}z\quad \mbox{with}\quad y>0.
$$
It is easy to verify that
$$\int_0^\infty \int_0^{\frac{1}{\sqrt{2 \pi }}} \frac{\sqrt{\frac{2}{\pi }} e^{-\frac{y}{2 \,z}}}{\sqrt{y\, z} \sqrt{-\log \left(2 \pi \, z^2\right)}} \, \mathrm{d}z \,\mathrm{d}y=1$$
After some work, I figured out that, remarkably, we can get the first moment, $\mathbb{E}(y)=\frac{1}{2 \sqrt{\pi }}$, and $\mathbb{E}(y^2)=\frac{\sqrt{3}}{2 \pi }$, without the density.
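In case a sanity check is useful: under one reading of the integrand, namely that conditionally on $z$ the variable $y$ is Gamma-distributed with shape $1/2$ and scale $2z$, while $z$ itself is distributed as the standard normal density evaluated at a standard normal point (a reading which reproduces both moments quoted above), a quick Monte Carlo sketch (my own) is:
// One standard normal sample via Box-Muller.
function randn() {
  let u = 0, v = 0;
  while (u === 0) u = Math.random();
  while (v === 0) v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

const N = 1e6;
let m1 = 0, m2 = 0;
for (let i = 0; i < N; i++) {
  const x = randn();
  const z = Math.exp(-x * x / 2) / Math.sqrt(2 * Math.PI); // standard normal density at x
  const w = randn();
  const y = z * w * w; // given z, y ~ Gamma(shape 1/2, scale 2z), since w^2 ~ chi^2_1
  m1 += y;
  m2 += y * y;
}
console.log(m1 / N, 1 / (2 * Math.sqrt(Math.PI)));  // E(y)   ~ 0.2821
console.log(m2 / N, Math.sqrt(3) / (2 * Math.PI));  // E(y^2) ~ 0.2757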
|
https://math.stackexchange.com/questions/738393
|
[
"integration",
"probability-theory",
"probability-distributions",
"definite-integrals",
"marginal-distribution"
] | 23 | 2014-04-03T09:14:43 |
[
"Wolfram is also unable to solve it.",
"I find $$\\mathbb{E}(y^n) = \\frac{2^{n/2}\\Gamma\\Big(n+\\frac12\\Big)}{\\pi^{(n+1)/2}\\sqrt{n+1}}.$$",
"Yes, that's true. I'll keep trying a few methods to evaluate it, but if they don't work, this is still a rather nice form I would argue.",
"@Zachary this kind of integrals is notoriously resistant to be evaluated in closed form. I'm pretty sure there isn't much to be done here :()",
"I was able to simplify the integral to $$4(2\\pi)^{-1/4}\\frac{1}{\\sqrt{z}} \\int_0^\\infty \\exp\\left(-ze^{2t^2}\\right) e^{-t^2}\\,dt,$$ where $z=\\sqrt{\\frac{\\pi}{2}}y$.",
"Bounty: Intuition for Conditional Expectation (1. by linking bounty question here, this question gets attention because the bounty question is linked to this question. 2. by linking bounty question here, nero gets to see bounty question and so might answer bounty question)",
"Indeed I am working on the product of 1) Chi-square distribution and 2) distribution of the density of a standardized Gaussian.",
"Looks as if conditioned on the value of $Z$, $Y$ is a Gamma random variable with parameters $\\left(\\frac{1}{2}, \\frac{1}{2\\sqrt{Z}}\\right)$ and $Z$ had marginal pdf of the form $\\frac{1}{\\sqrt{-\\log(2\\pi z^2)}}$ on $(0,1/\\sqrt{2\\pi})$.",
"Sorry it was a typo in the posting that I fixed (an extra z term). I still can't find solution."
] | 0 |
Science
| 0 |
314
|
math
|
Does every topological $n$-manifold ($n>0$) admit an embedding into $\Bbb R^{2n}$? If not, what $n$-manifold does not embed into $\Bbb R^{2n}$?
|
The strong Whitney Embedding Theorem tells us that every smooth $n$-manifold ($n>0$) admits a smooth embedding into $\mathbb{R}^{2n}$. Also, every topological $n$-manifold admits an embedding into $\mathbb{R}^{2n+1}$ (Munkres' Topology, Exercise §50.7).
My question now: can this last bound be lowered to $2n$? And if not, which topological $n$-manifold isn't embeddable into $\mathbb{R}^{2n}$? (Whitney's embedding theorem already tells us that such a manifold cannot admit a smooth structure)
|
https://math.stackexchange.com/questions/4059088
|
[
"general-topology",
"manifolds",
"differential-topology"
] | 23 | 2021-03-12T06:20:42 |
[
"@MoisheKohan Thanks",
"It's likely a wasted bounty. The result is known in compact case and, most likely, is not in the literature in the noncompact case. If one wants to find out, the best thing to do is to email Prof. Washington Mio (at Florida State University) and ask him directly. He is a coauthor of the paper Bryant, J. L.; Mio, W., \"Embeddings in generalized manifolds,\" with a result closest to the OP, answering it in the case of compact topological manifolds.",
"@IvinBabu: If you want references to the compact case, see my answer at the linked Mathoverflow question.",
"@JohnSamples Can you provide materials which elaborates the statement you gave- that any compact topological manifold with dimension not a power of $2$ can be embedded into $\\Bbb R^{2n}$",
"@LéoMousseau Yes, beware any people telling you that such an inherently attractice theorem is known and follows readily from known results. So what, they could prove it in 2 or 3 pages and get a paper in the Annals - but they just don't have the time? A bit suspicious . . .",
"@C.F.G Munkres embeds into $\\mathbb R^{2n+1}$. Here we consider $\\mathbb R^{2n}$.",
"I wonder why most of authors call this theorem a \"hard theorem\" while its proof is inside the Munkres' Book as exercise?",
"It is known that smooth (or more generally PL) manifolds embed into $\\mathbb R^{2n}$. See ams.org/journals/tran/1971-157-00/S0002-9947-1971-0278314-4/…",
"@JohnSamples It seems strange that such a basic result (not basic in the sense of difficulty of proving) isn't written down anywhere...",
"I don't think it's been formally proven but it's generally accepted as true. Any counterexample would have to be non-compact and have dimension $2^k$.",
"@AlessandroCodenotti I feel the same way. I dont see how this follows from any of the statements made in those papers. The commenters also didn't seem too confident in their ideas of proving this.",
"@AlessandroCodenotti I looked through the papers suggested by the first link in Sergey Melikhov's comment to Andy Putnam's answer and I was not able to find the desired statement. (The second link in Melikhov's comment seems to be broken.) I guess the situation is that the embedding into $\\mathbb{R}^{2n}$ somehow follows from the theorems in the Bryant-Mio and Johnston papers, but this isn't obvious to me as a non-expert.",
"mathoverflow.net/questions/34658/… whoops I copied the wrong link earlier. The answer is positive by comments here",
"The answer to your questions seems to be positive by various comments spread here math.stackexchange.com/questions/4059088/…",
"@AlessandroCodenotti exactly, every knot is homeomorphic to S^1 and is therefore embeddable in R^2.",
"@Henno aren't knots just $S^1$ as topological manifolds though?",
"There are knots in $\\Bbb R^3$ that don’t have a planar embedding."
] | 0 |
Science
| 0 |
315
|
math
|
Does the sum of reciprocals of all prime-prefix-free numbers converge?
|
Call a positive integer $n$ prime-prefix-free if for all $k \ge 1$, $\lfloor \frac{n}{2^k} \rfloor$ is not an odd prime. (Odd because otherwise the property is trivial, as every integer greater than $3$ has $10_2=2$ or $11_2=3$ as a proper binary prefix.)
Does the sum of reciprocals of all prime-prefix-free numbers converge?
I know that the sum of reciprocals of all prime prime-prefix-free numbers converges, using the Kraft-McMillan inequality and the fact that their binary representations form a prefix-free set.
But this doesn't seem like much of a starting point for the whole problem, since a number being prime-prefix-free isn't related to whether its factors are (except when the other factor is a power of $2$). I'm willing to assume Cramér's conjecture if that helps, limiting how many bits must be appended to make a number prime.
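Not an answer to the convergence question, but a small JavaScript sketch (my own) for experimenting with partial sums of the reciprocals; it enumerates the prime-prefix-free numbers directly from the definition, and its first outputs are 1, 2, 3, 4, 5, 8, 9, 16, 17, 18, 19, 32, 33, 36, 37.
// Is p an odd prime?
function isOddPrime(p) {
  if (p < 3 || p % 2 === 0) return false;
  for (let d = 3; d * d <= p; d += 2) if (p % d === 0) return false;
  return true;
}

// n is prime-prefix-free if no proper binary prefix floor(n / 2^k), k >= 1, is an odd prime.
function isPrimePrefixFree(n) {
  for (let m = Math.floor(n / 2); m >= 1; m = Math.floor(m / 2)) {
    if (isOddPrime(m)) return false;
  }
  return true;
}

let sum = 0;
const first = [];
for (let n = 1; n <= 1e6; n++) {
  if (isPrimePrefixFree(n)) {
    sum += 1 / n;
    if (first.length < 15) first.push(n);
  }
}
console.log(first.join(", "));
console.log(sum); // partial sum of reciprocals up to 10^6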
|
https://math.stackexchange.com/questions/2288648
|
[
"number-theory",
"convergence-divergence",
"prime-numbers"
] | 22 | 2017-05-19T22:15:08 |
[
"How many prime-prefix-free numbers less than $n$ are there asymptotically?",
"The sum up to 10^11 is 0.75523... and I doubt more than 2-3 of these digits carry over to the infinite sum.",
"Mirko, in that case none of the values would be integers for odd $n$, so every odd number would be prime-prefix-free and the sum of reciprocals would diverge.",
"I wonder if you know the answer for the version of your question when, in the definition of prime-prefix-free, you replace $\\lfloor\\frac{n}{2^k}\\rfloor$ with $\\frac{n}{2^k}$. It looks like it would be easier to handle that version(modified one), though I do not have an answer for either. (I would guess divergent, for what seems to me the easier version.) Nice question!",
"Made a stupid mistake in my \"answer\". Estimating the sum of reciprocals of prime-prefix numbers not exceeding $n$ by $$\\sum_{\\text{odd prime }p\\le n} \\sum_{k\\ge 0: 2^k p\\le n} \\sum_{m=2^k p}^{\\big((p+1)2^k-1\\big)\\wedge\\, n}\\frac{1}{m}$$ gives upper estimate of $\\log_2 n \\times \\log\\log n$, which is even bigger than the sum of reciprocals of all numbers. This is because there are too many prime-prefix primes.",
"I did when I couldn't find it either, it's awaiting approval. oeis.org/draft/A287117",
"So your sequence begins $1,2,3,4,5,8,9,16,17,18,19,32,33,36,37,\\dots$ ? I'm surprised to not find this in the OEIS - perhaps you could add it?",
"For a random integer $n$, the probability of $n, n/2, n/4, \\cdots$ are all not being a odd prime is approximately $\\prod (1-\\frac{1}{\\ln n-k\\ln 2})$. Also, $\\prod (1-\\frac{1}{\\ln n-k\\ln 2})\\approx\\prod (1-\\frac{\\ln2}{\\ln n-k\\ln 2})=\\frac{\\ln n-(k+1)\\ln 2}{\\ln n}=O(\\frac{1}{\\ln n})$ (telescoping product, and $\\ln n-(k+1)\\ln 2$ must be smaller than $\\ln 2$). Therefore the sum is \"approximately\" $\\sum\\frac{1}{n \\ln n}$ and this sum diverges. Of course, this is not a proof..."
] | 0 |
Science
| 0 |
316
|
math
|
Found a recursive identity (involving a continued fraction) for which some simplification is needed.
|
This is my second question in this forum; as I previously explained, I am a "hobbyist" mathematician and not a professional one; I apologize in advance if something is wrong in my question.
I enjoy doing numerical computations in my leisure time, and at the end of the year 2015, I was working on some personal routines related to the world of ISC. With the help of these pieces of code, I detected several identities algorithmically; one was already described here and solved (see this question regarding a continued fraction for tanh). After having spent some time on another one a year ago, I would like to get some help in simplifying what I found. This new continued fraction is:
$$
\mathcal{K}\left(k,x\right)=\operatorname*{K}_{n=1}^{\infty} \frac{
\left(n+1\right)\left(\left(k-1\right)x-n\right)k}{\left(n+1\right)\left(k+1\right)} \tag{1} \label{1}
$$
The previous notation is the one I use; I find it convenient and it can be found for instance in Continued Fractions with Applications by Lorentzen & Waadeland, but I know that some people don't like it; it has to be read the following way:
$$
a_0 + \operatorname*{K}_{n=1}^{\infty} \frac{b_n}{a_n} = a_0 + \cfrac{b_1}{a_1 + \cfrac{b_2}{a_2 + \cfrac{b_3}{a_3 + \dotsb}}}
$$
I found many partial formulas, some of them very nice, involving hypergeometric functions or the Lerch Phi function. But of course I was rather interested in a fully general identity. I finally found something, but it needs to be simplified now, and this is what I am asking about here. I would be very happy to finally see a nice identity for this continued fraction. Maybe such an identity could be published somewhere (if it happens to be interesting), and I would happily leave it to anyone who has taken some time on it.
Is it true that:
$$\mathcal{K}\left(k,x\right)=
\frac{\Gamma\left(x+1\right)\Gamma\big((k-1)x\big)}{\Gamma\left(kx\right)}
\times\mathcal{L}\left(k,x\right)$$
where,
$$\mathcal{L}\left(k,x\right)=\frac{k^{kx}}{2\left(k-1\right)^{\left(k-1\right)x-1}}+\sum_{i=0}^\infty
\big(
\alpha\;\mathcal{K}(k, x+i)
-
\beta\;\mathcal{K}(k, x+i+1)
\big)$$
and,
$$\alpha=\frac{\left(k-1\right)^{\left(k-1\right)i}\Gamma\left(k\left(i+x\right)\right)}{k^{ki}\,\Gamma\left(1+i+x\right)\Gamma\left(\left(k-1\right)\left(x+i\right)\right)}\\
\beta=\frac{\left(k-1\right)^{\left(k-1\right)i+k}\Gamma\left(k\left(1+i+x\right)\right)}{k^{k\left(i+1\right)}\Gamma\left(1+i+x\right)\Gamma\left(\left(k-1\right)\left(x+i\right)+k\right)}$$
Since I work with empirical computations, I have to say that this formula is rather difficult to check because convergence is rather slow, but it is a result I managed to get by gathering various other materials.
In case someone would wonder whether it is worth spending some time on it or not, I can provide some partial results like nice special values:
$$
\mathcal{K}\left(2, 1/2\right)\;=\;\pi/2-2
\mathrm{,}
\qquad
\mathcal{K}\left(4, 1/2\right)\;=\;4\pi\sqrt{3}/9-2
\qquad\mathrm{etc.}
$$
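For anyone who wants to reproduce these numbers, here is a small JavaScript sketch (my own; it simply truncates the continued fraction and folds it from the bottom up):
// Truncated evaluation of K(k, x) = K_{n >= 1} b_n / a_n with
//   b_n = (n + 1) * ((k - 1) * x - n) * k   and   a_n = (n + 1) * (k + 1).
function K(k, x, terms = 2000) {
  let tail = 0;
  for (let n = terms; n >= 1; n--) {
    const b = (n + 1) * ((k - 1) * x - n) * k;
    const a = (n + 1) * (k + 1);
    tail = b / (a + tail);
  }
  return tail;
}

console.log(K(2, 0.5), Math.PI / 2 - 2);                    // both ~ -0.4292
console.log(K(4, 0.5), 4 * Math.PI * Math.sqrt(3) / 9 - 2); // both ~  0.4184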
The case $k$ being an integer
When $k$ is an integer, $k\geq3$, the continued fraction follows a functional identity:
$$
\mathcal{K}\left(k,x\right)\;=\;
g_k\left(x\right) + \left(\displaystyle\frac{k-1}{k}\right)^k \displaystyle\frac{\Gamma{\left(\left(k-1\right)x\right)}\Gamma{\left(k\left(x+1\right)\right)}}{\Gamma{\left(\left(k-1\right)x+k\right)}\Gamma{\left(kx\right)}}\mathcal{K}\left(k, x+1\right)
$$
where $g$ is a sequence of rational functions. Unfortunately, I couldn't find a general form for it, but I could easily compute about 30 of them with the help of numerical software. The best I could do here was to build a triangle of integer coefficients needed for building a given $g_k$ function.
Let's call $m_{(a,b)}$ the coefficient in the $a^\textrm{th}$ row and $b^\textrm{th}$ column from the following triangle:
$$
\begin{array}{rrrr@{\qquad}l}
-8&&&&\textrm{for $k=3$}\\
-20&2&&&\textrm{for $k=4$}\\
-40&12&-24&&\textrm{for $k=5$}\\
-70&42&-202&624&\textrm{for $k=6$}\\
\textrm{etc.}
\end{array}
$$
(I can provide about 30 rows of these coefficients, see below); then,
$$
g_k\left(x\right)\;=\;\displaystyle\sum_{i=1}^{k-2}\displaystyle\frac{m_{(k-2,i)}x
\Gamma{\left(\left(k-1\right)x\right)}}{\left(x+1\right)k^i\Gamma{\left(\left(k-1\right)x+i\right)}}
$$
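As a concrete illustration (my own check, under my reading of the indexing $m_{(k-2,i)}$, so that the single entry $-8$ of the first row gives $g_3(x)=-4/\big(3(x+1)\big)$), one can test the $k=3$ functional identity numerically:
// Same truncated evaluation of K(k, x) as in the earlier sketch.
function K(k, x, terms = 2000) {
  let tail = 0;
  for (let n = terms; n >= 1; n--) {
    tail = ((n + 1) * ((k - 1) * x - n) * k) / ((n + 1) * (k + 1) + tail);
  }
  return tail;
}

// g_3(x) = -8 * x * Gamma(2x) / ((x + 1) * 3 * Gamma(2x + 1)) = -4 / (3 * (x + 1)).
function g3(x) {
  return -4 / (3 * (x + 1));
}

// For k = 3 the Gamma ratios in the identity reduce to rational expressions:
//   Gamma(3x + 3) / Gamma(3x) = 3x (3x + 1) (3x + 2),
//   Gamma(2x) / Gamma(2x + 3) = 1 / (2x (2x + 1) (2x + 2)).
const x = 0.7;
const factor = Math.pow(2 / 3, 3)
  * (3 * x) * (3 * x + 1) * (3 * x + 2)
  / ((2 * x) * (2 * x + 1) * (2 * x + 2));
console.log(K(3, x));                      // left-hand side
console.log(g3(x) + factor * K(3, x + 1)); // right-hand side of the functional identity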
A direct formula for $\mathcal{K}\left(k,x\right)$ involves the same rational function $g_k$:
\begin{equation}
\begin{array}{lcl}
\mathcal{K}\left(k,x\right)&=&
\displaystyle\frac{ \Gamma{\left(x+1\right)}\Gamma{\left(\left(k-1\right)x\right)} }{ \Gamma{\left(kx\right)} }\\[24pt]
&&\times\quad\left( \displaystyle\frac{k^{kx} }{2\left(k-1\right)^{\left(k-1\right)x-1}}
+\displaystyle\sum_{i=0}^\infty
\displaystyle\frac{ \left(k-1\right)^{\left(k-1\right)i}
\Gamma{\left(k\left(i+x\right)\right)}}{k^{ki}\Gamma{\left(1+i+x\right)}\Gamma{\left(\left(k-1\right)\left(x+i\right)\right)}}g_k\left(x+i\right)
\right)
\end{array}
\label{directform}
\end{equation}
The first formula I gave in my question was found by substituting $g$ from these last expressions. Of course, finding some formula for the $m$ coefficients would be nice, but I couldn't figure out any despite the time I spent on it.
It also looks like the infinite sum above can be turned into a finite sum of $k-2$ hypergeometric functions, giving a direct formula for $\mathcal{K}\left(k,x\right)$. The initial cases are the simplest, like for $k=3$:
$$
\begin{array}{lcl}
\displaystyle\mathcal{K}\left(3,x\right)&=&
\displaystyle\operatorname*{K}_{n=1}^{\infty}\displaystyle\frac{\left(n+1\right)\left(6x-3n\right)}{4n+4}\\
&=&
\cfrac{12x-6}{8+\cfrac{18x-18}{12+\cfrac{24x-36}{16+\cfrac{30x-60}{20+\ddots}}}}\\
&=&
\displaystyle\frac{27^x\, \Gamma\left(x+1\right)\Gamma\left(2x\right)}
{4^x\, \Gamma\left(3x\right)}
-
\displaystyle\frac{4\,\displaystyle{}_3F_2\left(1,x+\frac{1}{3},x+\frac{2}{3};\;x+\frac{1}{2},x+2;\;1\right)}{ 3\left(x+1\right) }
\end{array}
$$
The general case being:
$$
\begin{array}{lcl}
\mathcal{K}\left(k,x\right)&=&
\displaystyle\frac{k^{kx} \Gamma{\left(x+1\right)}\Gamma{\left(\left(k-1\right)x\right)} }{2\left(k-1\right)^{\left(k-1\right)x-1}\Gamma{\left(kx\right)}}\\[24pt]
&+&
\displaystyle\sum_{i=1}^{k-2}
\displaystyle\frac{m_{(k-2,i)} x
\;\;\displaystyle
{}_{k+1}F_{k}
\left(
\begin{array}{l}
1,x+\frac{1}{k},x+\frac{2}{k},\dots,x+\frac{k-1}{k},x+1\\[4pt]
x+\frac{i}{k-1}, x+\frac{i+1}{k-1}, \dots,x+\frac{i+k-2}{k-1},x+2
\end{array}
{1}
\right)
}
{k^i\left(x+1\right) \Gamma{\left(\left(k-1\right)x+i \right)}/\Gamma{\left(\left(k-1\right)x\right)} }
\end{array}
$$
where obviously the $x+1$ term can always be cancelled (since one of the $x+(i+\dots)/(k-1)$ terms is always equal to $x+1$, leading to a ${}_{k}F_{k-1}$ function).
Unfortunately, I couldn't figure out how to make some substitution between this last formula with $m$ coefficients and previous formulae above involving the $g$ function.
The case $kx$ being an integer
In order to study consecutive integer values of the~$kx$ product, I will now use another notation:
$$
\mathcal{K}'\left(k', x'\right)\;=\;\mathcal{K}\left(x', k'/x'\right)
$$
Starting from here, $k'$ is assumed to be an integer (according to the previous notation, it means that $kx$ is an integer).
New identities, involving different expressions can be found (leading to different kinds of special values when possible):
$$
\begin{array}{lcl}
\mathcal{K}'\left(k',x'\right)&=&
h_{k'}\left(x'\right) -
\left(\displaystyle\frac{x'}{x'-1}\right)^{k'-1}
\displaystyle\frac
{k'\;\Phi\left(-(x'-1)^{-1}, 1, k'/x'\right)-x'+1}
{k'/x'\; \beta\left(k', -k'/x'\right)}\\[24pt]
&=& h_{k'}\left(x'\right) -
\left(\displaystyle\frac{x'}{x'-1}\right)^{k'-1}
\displaystyle\frac{
\displaystyle\int_{t=0}^1
\left(
\displaystyle\frac{\left(1-t\right)\left(x'-1\right)}{x'-1+t}
\right)^{k'/x'}\textrm{d}t}
{k'/x'\; \beta\left(k', -k'/x'\right)}\\[24pt]
&=& h_{k'}\left(x'\right) -
\left(\displaystyle\frac{x'}{x'-1}\right)^{k'-1}
\;
\displaystyle\frac
{
\displaystyle{}_{2}F_{1}\left(1,k'/x'\;; 1+k'/x'\;; \left(1-x'\right)^{-1}\right) x'
-x' + 1 }
{k'/x'\; \beta\left(k', -k'/x'\right)}\\[24pt]
\end{array}
$$
where $h$ is (again) a sequence of rational functions. Let's call $m'$ the sequence of the following polynomial functions (I computed about 20 of them):
$$
\begin{array}{l@{\qquad}l}
1,&\textrm{(for $k'=3$)}\\
-6 + 4x,&\textrm{(for $k'=4$)}\\
41 - 53x + 18x^2,&\textrm{(for $k'=5$)}\\
-348 + 648x - 420x^2 + 96x^3,&\textrm{(for $k'=6$)}\\
\textrm{etc.}
\end{array}
$$
(I have about 20 rows like that, see below); then,
$$
\left\{\begin{array}{l@{\qquad}l}
h_1\left(x'\right)\;=\;-1&\textrm{if $k'=1$}\\[12pt]
h_2\left(x'\right)\;=\;0&\textrm{if $k'=2$}\\[12pt]
h_{k'}\left(x'\right)\;=\;
\displaystyle\frac{ m'_{k'-2} \left(k'\left(x'-1\right)-x'\right)\left(x'-1\right) }{ \left(k'-1\right)!\left(x'-1\right)^{k'-1} }
&\textrm{if $k'\geq3$}
\end{array}\right.
$$
Again, finding some formula for these $m'$ coefficients would lead to a beautiful formula for $\mathcal{K}$.
Materials (data)
Finding a formula for one of these two triangles would lead to a direct formula; I put my data below in case someone would like to have a glance at them. I used the Mathematica format but it is very easy to convert it to anything else.
Two pieces of data are attached:
the triangle of $m$ coefficients;
the triangle of $m'$ polynomials.
First, the triangle of $m$ coefficients:
m = {
{-8},
{-20,2},
{-40,12,-24},
{-70,42,-202,624},
{-112,112,-944,6800,-28160},
{-168,252,-3240,39990,-378180,1956240},
{-240,504,-9120,168780,-2691500,31299660,-193818240},
{-330,924,-22308,573210,-13533950,262134768,-3604679456,25969798400},
{-440,1584,-49104,1665048,-54028800,1533955752,-34784795304,551021454648,-4524877873152},
{-572,2574,-99528,4294290,-182338520,7056003570,-232622918920,6027044680440,-107934603537600,994719833856000},
{-728,4004,-188760,10081500,-540836660,27196564920,-1213669081240,45402131767300,-1320731548020500,26362209822109700,-269367401834854400},
{-910,6006,-338910,21926190,-1447260750,91405905570,-5269521952170,265270592109600,-11076506267112000,357029036918928000,-7854973969921056000,88120488036962304000},
{-1120,8736,-581152,44753280,-3559649600,275205604224,-19825712025728,1282796713117920,-71715923149544960,3301368476366175072,-116699890447511246304,2804668390029759121440,-34267109445760293273600},
{-1360,12376,-958256,86572772,-8159187400,756765661176,-66440724681072,5348371530601974,-382626987630417516,23479203132873399912,-1180109553700743730064,45368332310341259611392,-1182254848944902971766528,15625389962188145791748096},
{-1632,17136,-1527552,159942120,-17614027080,1928217475800,-202304078444160,19770089799495420,-1752820980572484900,137106641163083684400,-9150469205270049657000,498255795785126793714300,-20689280727893354516039100,580954533672149606803333500,-8258153843323806482092032000},
{-1938,23256,-2364360,283936380,-36111651940,4603291360920,-568072401325800,66113606126531250,-7091616708136156350,685093642937381277600,-58084005963489507601600,4185232111643968748563200,-245300606116021459523520000,10937978741665549577203200000,-329201103515195643674342400000,5008018989272579747902955520000},
{-2280, 31008, -3565920, 486748080, -70778344000, 10387428137520, -1488100140390640, 203083381548077800, -25863156057665295600, 3013571052233142231600, -314587396914476897195600, 28707485207091210763643400, -2219712979095004695654352000, 139280212909636939833667975000, -6636253210525016113608507935000, 213097407927242925450657097805000, -3454359206881951401298519654400000},
{-2660, 40698, -5255856, 809056860, -133342927200, 22312129321188, -3669893137886976, 579814786246525830, -86348985070472364240, 11912853180399067114218, -1495810636771876664855616, 167605806446114219615512080, -16367656617234522958801322880, 1351158196468373411398992679968, -90343714080278210268675472690176, 4580006947756999980100721131530240, -156282112623791407868541623144448000, 2689249605403395016547015123312640000},
{-3080, 52668, -7589208, 1308328296, -242549291800, 45885645345792, -8583630937867664, 1553160300088906908, -267115992196423617852, 42987425869754619349668, -6375040043352284259869056, 857073372770354425645988028, -102516882515527648341577456052, 10661831289669551213862356652936, -935559064657406376990369478217592, 66392317932445459737857422287168372, -3567745082228659013145697404664186980, 128910715024676759785064905466962718772, -2346816894979813166169796326498705604608},
{-3542, 67298, -10758066, 2064221940, -427579512340, 90781666964580, -19156534075381220, 3933373867476907490, -773001449364837212650, 143337480955932385601050, -24740379167353794462482250, 3919619596881620663698352400, -561261767174736596981182684800, 71335052610001221107048830848000, -7868282527494059201650532567232000, 731130238635173198053153940388864000, -54874347623661338563124599643795456000, 3115425378725886177371208787427500032000, -118823016992948260463398185243235328000000, 2281667516033630958272834061325819904000000},
{-4048, 85008, -14997840, 3185310480, -732817955520, 173481958077120, -40999952833277760, 9476856735932804400, -2109071440724185828800, 445975726476730199677200, -88512496883173204230589200, 16287555278375689955554908000, -2742762926014265565760465116000, 416464503552989975128829015232000, -56022670706540265707212091974752000, 6530157038781041616772104173250906000, -640432883214307884870393296312193408000, 50678900925449456482492390925833162170000, -3030888088511779915449136680430563767970000, 121680563022029894895609769265080802797710000, -2457881871574620592645101983798188100812800000},
{-4600, 106260, -20594200, 4817335050, -1224368027500, 321314420301000, -84511921344096400, 21837002485484744250, -5460560343151268855500, 1305129863913508752619500, -294830327010826293740053000, 62266523988286549276643448150, -12155836300655550173555485100500, 2166552581042493037698711451044000, -347528815311777085029924893736724000, 49310886906769450018463698053718956000, -6055094240863690023007706913404687496000, 624931908053334691607080995207899763360000, -51995246297017813506321403783934541271680000, 3267058354801244958789636823999905461614080000, -137714123890197288080314052933290069397790720000, 2919082688078782495755638635126117397822177280000},
{-5200, 131560, -27890720, 7153246100, -1998828604500, 578492714221380, -168381395188292160, 48336833335182707700, -13488618129875116559100, 3616198939768951478832120, -921761072167762889584562040, 221195726134157418021734756400, -49475332499912242476184032601200, 10205195620733840007949834678752240, -1918172758706697630190525216812587520, 323985355991420046527323571346102988380, -48344306336853827546608471043597263180580, 6236410605945196309646575950593070996242112, -675574363650914146210117776862820284274422024, 58952707582889583114849809235709300486236865900, -3882542732229399143933348322987701786016750303500, 171441026104474223322555734221353597751602880467500, -3804948188770142619704669041367337098336518799360000},
{-5850, 161460, -37297260, 10445304870, -3194948279250, 1014976460259720, -325284627793625640, 103172770888592820030, -31935449045956426109370, 9539216762922608385767220, -2723057324780894154211144140, 736160112029528961953639831370, -186796816819349796321290668269630, 44075061969736070597572230035573520, -9572302178958218895730236218841324000, 1891505089131626077283496419557169149440, -335445207934760792427195571281784372615680, 52500216486691388374580499101901862088712192, -7097170407703652020998162722033686958068359168, 805065643420847023775013665323598164107193712640, -73517194182833158880223919841841477149672840232960, 5063910198045757853788580192610834249566744036769792, -233753499049244474320664989973779541903817843793723392, 5421019686584291940290446961296577366550977042379177984},
{-6552, 196560, -49299120, 15019547400, -5008903972800, 1739242914960600, -610937879343586200, 213044845306016606400, -72754198545840383754000, 24070115922338692488744000, -7644318575328042577024440000, 2310967474707680781916704026400, -659643825387695670545486563900800, 176310960419571195542552313305220000, -43737562948690701530900945112487860000, 9971569996183917023985409450427829525000, -2065816103048706564571284238051613500970000, 383694616243443437568263013812039800791990000, -62837694530317608096687694020291847950887850000, 8882027287256940966231272654559742520461647675000, -1052798325643449998635040396582752781540903534400000, 100403132568689053077039174326824794886423800369525000, -7219010863790387928075409606725932651241502240697925000, 347694912384706890967941398378351924881505314746114375000, -8410211480678023320925498721110295938761779872530432000000},
{-7308, 237510, -64467000, 21292941150, -7714097921400, 2916404269698150, -1118190692314588200, 426802089352299911250, -160027260019840672173000, 58331250782529140212904250, -20490396263678036008373775000, 6882240483264899117171621464650, -2193778617416596625925140519889000, 658701474719992054755835904812589250, -184849219993358885493062726683645011000, 48071268295949005824672264507034120713000, -11474627010559184001964832102040305176064000, 2486300080568850616517044780683050329279500000, -482557829071841856097474441995986751588159600000, 82519812703910109373696671538744758363463431600000, -12171485983756715937697088833853438913213231139200000, 1504622344753359530620585743898465267509637578086400000, -149578514098722961283327295745791298648255342896128000000, 11206095897313941255176473673738843538864273909178368000000, -562168844769302825642413825011269789161124218168180736000000, 14158685308482113187947385422334786985368814916547903488000000},
{-8120, 285012, -83467800, 29793593700, -11686535347500, 4793533613422200, -1998556890132738600, 831586576427972253000, -340848228875785588089000, 136238023124447243505363000, -52660395629469710899852464000, 19538961040560845384445253280400, -6910955344258841823610759577910000, 2314356603026298502745751283117512000, -728675543038928155386332670093531348000, 214094057758291962937552370682805572874500, -58219061862163874792551203092332714335562500, 14516218849791911496733341909570194788693657500, -3282604618552285952731079609822870039677157280000, 664411186688887595279287675566612107186757131137500, -118409734996254908439089485033112187166889903398912500, 18191629115197225627312502465880976532595946618885575000, -2341224306602995570500391595079223927042601832325548375000, 242207299534450314667366061440535601337557549950222099062500, -18876082685899484114785412705333370447849929199143588260312500, 984735360414688845599685871865811518568346909249116274589062500, -25783538810629517021266249543138106125941453578155830804480000000},
{-8990, 339822, -107076294, 41184403650, -17437036139250, 7734593443379670, -3494544596518289790, 1579284525442183941150, -704824443778733315833950, 307603458576783505012150890, -130224815626952571544054365330, 53104886131252741135008778869750, -20725045171508268666798696739515750, 7692149381874530736969377052576265170, -2697951867406866051352410906078675593130, 888312186434005905720786992903999283113280, -272592942573130969721591283451860861263314880, 77338601225244978709744671633189447473744656512, -20101127086406857123404360233486723068548779401984, 4734712312004040060142996428231341452704698192056320, -997560027458845642233086777856612791698591648386293760, 184958250344358621036304852979454062719166092717119602688, -29548178532481410695731393820614988207097594428678936395776, 3952663598502220791832289869795663784663381872742938194739200, -424872593639734370119419756220910340050772936517217787445248000, 34392492457805374774120867589709950335110603672365392763289600000, -1863044156298137915554810903420018148506651555374947876496998400000, 50638772787167674049719466457371229923270837855299620820746240000000}
};
Then the triangle of $m'$ polynomials:
mm = {
1, -6 + 4*x, 41 - 53*x + 18*x^2, -348 + 648*x - 420*x^2 + 96*x^3,
3669 - 8734*x + 8067*x^2 - 3482*x^3 + 600*x^4,
-47248 + 135328*x - 158672*x^2 + 96800*x^3 - 31248*x^4 + 4320*x^5,
727641 - 2423511*x + 3405267*x^2 - 2622141*x^3 + 1188252*x^4 - 305748*x^5 +
35280*x^6, -13122720 + 49768960*x - 81172560*x^2 + 74655760*x^3 - 42491760*x^4 +
15258160*x^5 - 3258720*x^6 + 322560*x^7,
271959293 - 1157733608*x + 2149311530*x^2 - 2291218376*x^3 + 1553100917*x^4 -
697389632*x^5 + 206763300*x^6 - 37696464*x^7 + 3265920*x^8,
-6373686528 + 30125806848*x - 62800700544*x^2 + 76183473024*x^3 -
59781151872*x^4 + 31886134272*x^5 - 11774336256*x^6 + 2965752576*x^7 -
471208320*x^8 + 36288000*x^9, 166695335769 - 867080392969*x + 2008379540976*x^2 -
2736817207770*x^3 + 2443396898805*x^4 - 1507011320601*x^5 + 659441858394*x^6 -
206111203940*x^7 + 45043779816*x^8 - 6336456480*x^9 + 439084800*x^10,
-4812534974464 + 27341393304064*x - 69753592234368*x^2 + 105687047754624*x^3 -
106030998529344*x^4 + 74392108414848*x^5 - 37603296588032*x^6 +
13896748361216*x^7 - 3755526664512*x^8 + 723700309248*x^9 - 91276174080*x^10 +
5748019200*x^11, 151999996277925 - 937041564265650*x + 2613327954360525*x^2 -
4364620345415250*x^3 + 4871696279607675*x^4 - 3842208959921550*x^5 +
2208928948800975*x^6 - 942087609901350*x^7 + 300405312044100*x^8 -
71352461709000*x^9 + 12280661164800*x^10 - 1402935292800*x^11 + 80951270400*x^12,
-5212950320375808 + 34671923430780928*x - 105013701846865920*x^2 +
191879918776057856*x^3 - 236246229569439744*x^4 + 207409229427830784*x^5 -
134079476009502720*x^6 + 65028411548069888*x^7 - 23908234488465408*x^8 +
6687356722905088*x^9 - 1414323490805760*x^10 + 219693856813056*x^11 -
22925711370240*x^12 + 1220496076800*x^13,
192908434730267801 - 1377309563570454971*x + 4504240832008823353*x^2 -
8944731476737727263*x^3 + 12057054300505697583*x^4 - 11683650400191522273*x^5 +
8411790424807054099*x^6 - 4588515109996305829*x^7 + 1918116031712679596*x^8 -
618064823563159856*x^9 + 153647212030630848*x^10 - 29242401180913008*x^11 +
4135229005230720*x^12 - 396997001452800*x^13 + 19615115520000*x^14,
-7661276413060165632 + 58455665081940344832*x - 205362993128521586688*x^2 +
440656568659067627520*x^3 - 646000731200138715648*x^4 +
685780191107302542336*x^5 - 545270893574020280064*x^6 +
331406006965726821120*x^7 - 155847180496082807808*x^8 +
57087389250576640512*x^9 - 16335131671115336448*x^10 + 3647848903953212160*x^11 -
630293068833472512*x^12 + 81737176968299520*x^13 - 7263281191219200*x^14 +
334764638208000*x^15, 325015466875658755821 - 2639668188726987561436*x +
9917145842014784419212*x^2 - 22874961160535389523056*x^3 +
36258193335964278154110*x^4 - 41887476949175919553816*x^5 +
36506794561565447125836*x^6 - 24516370991512685833088*x^7 +
12850659286368069730317*x^8 - 5296394247470924322860*x^9 +
1722523200312064838592*x^10 - 442409275011858790256*x^11 +
89540582470596717552*x^12 - 14150517433376508288*x^13 +
1693379120667874560*x^14 - 140015823275059200*x^15 + 6046686277632000*x^16,
-14668500851872718848000 + 126360196032621236224000*x -
505641446052759169024000*x^2 + 1248039501127422625792000*x^3 -
2127811079963561138176000*x^4 + 2659365515136882884608000*x^5 -
2523680829463448281088000*x^6 + 1858664349808259201024000*x^7 -
1076955696528752952320000*x^8 + 494913795536023414784000*x^9 -
181130674396834734080000*x^10 + 52865257295200467968000*x^11 -
12296348611822927872000*x^12 + 2272553630125056000000*x^13 -
330581514194555904000*x^14 + 36704049561050112000*x^15 -
2836877949868032000*x^16 + 115242726703104000*x^17
}
|
https://math.stackexchange.com/questions/2097510
|
[
"number-theory",
"gamma-function",
"continued-fractions",
"hypergeometric-function"
] | 22 | 2017-01-14T06:53:32 |
[
"@TitoPiezasIII I will do the same thing for other formulas I have; they may have some interest for understanding how the function behave. Thank you again for your interest. Regards.",
"@ThomasBaruchel: I made some formatting changes to your general identity. Pls check. If you have a long expression, the way to do it is to break it up. Kindly also include your formula for integer $k$ and if it is too long for the screen, you can format it like what I did with $\\mathcal{K}(k,x)$.",
"@Nicco You are right ; I will fix it immediately. Regards."
] | 0 |
Science
| 0 |
317
|
math
|
A question connected with the decomposition of a functional on $C(X)$ on Riesz and Banach functionals
|
Let $X$ be a metric space and let $C(X)$ be the family of all bounded continuous functions from $X$ to $\mathbb{R}$.
We call a positive linear functional $\varphi: C(X) \rightarrow \mathbb{R}$ a Riesz functional if there is a Borel measure $\mu$ on $X$ such that $\varphi(f)=\int_X f \,d\mu$, for $f\in C(X)$.
We call a positive linear functional $\varphi: C(X) \rightarrow \mathbb{R}$ a Banach functional if for each Borel measure $\nu$ on $X$ the condition $\int_X f \,d\nu\leq \varphi(f)$, for $f\in C(X)$, implies that $\nu$ is trivial.
There is a well-known theorem:
Let $X$ be a Polish space. Then, for each positive linear functional $\varphi: C(X) \rightarrow \mathbb{R}$ there is a unique pair $(\varphi_0,\varphi_*)$ of positive linear functionals defined on $C(X)$, such that $\varphi_0$ is a Riesz functional, $\varphi_*$ is a Banach functional and $\varphi=\varphi_*+\varphi_0$. Moreover, the measure $\mu$ related to $\varphi_0$ is defined by: $$\mu(K)=\inf\{\varphi(f): f\in C(X), 1_X\geq f \geq 1_K\},$$ for each compact set $K\subset X$.
More precisely, for the proof, we define:
$$\varphi_{\delta}(f)=\sup\{\varphi(h): \mbox{ supp}\,h\in N(\delta), 0\leq h\leq f\},$$ for $\delta>0$, $$\varphi_{0}(f)=\lim\limits_{\delta \to 0^+}\varphi_{\delta}(f),$$ for $f\in C(X), f\geq 0$, and $$\varphi_{0}(f)=\varphi_{0}(f^+)-\varphi_{0}(f^-),$$ for $f \in C(X)$, where $N(\delta)$ is the family of sets that possess a covering composed of a finite number of open balls of radius $\delta$.
My question concerns the truth of the following sentence: Let $X$ be a $\sigma$-compact Polish space. Assume that $\varphi^x:C(X) \rightarrow \mathbb{R}$ is a positive linear functional for all $x \in X$, and let $((\varphi^x)_0,(\varphi^x)_*)$ be the corresponding pair of Riesz and Banach functionals, for $x \in X$. If the mapping $X \ni x \mapsto \varphi^x(f)$ is continuous for all $f \in C(X)$ and $\varphi^x(1_X)=1$, for $x \in X$, then the mapping $X \ni x \mapsto (\varphi^x)_0(f)$ is continuous for all $f \in C(X)$ (or maybe only for $f \in C_c(X)$).
I was only able to prove that the mapping $X \ni x \mapsto (\varphi^x)_0(f)$ is upper semi-continuous, for $f\in C_c(X)$.
|
https://math.stackexchange.com/questions/54377
|
[
"real-analysis",
"probability-theory",
"measure-theory"
] | 22 | 2011-07-28T15:55:48 |
[
"Actually, I am trying to use this theorem to generalize a result in the paper of Meyn and Tweedie \"Markov Chains and Stochastic Stability associated with Markov e-chains\" on the polish spaces. Sorry I didn't gave this kind of informations. I will be very grateful for your help.",
"It's getting too late for me to think straight, maybe I'll get back to your problem tomorrow. I'm pretty sure that you're trying to adapt the paper (specifically Appendix A) of Lasota-Szarek Lower bound technique in the theory of a stochastic differential equation to your more general situation? It would be extremely helpful if you gave the reference you're working with next time you ask a question.",
"I can afford to assume that X is also $\\sigma$-compact. Let $\\mu_x$ be a measure related to Riesz functional $(\\varphi^x)_0$. We have: $(\\varphi^x)_0(f)=\\mu_x(X)-(\\varphi^x)_0(1-f)=\\lim\\limits_{n\\to\\infty}\\mu_x(K_n)-(\\varphi^x)_0(1-f)$, so i think it suffice to prove that $x \\mapsto \\mu_x(K)$ is lower semi-continous, for compact sets $K$, but i dont know if that is truth.",
"Yes, exactly, I assumed that $f\\in C_c(X)$ and used the fact, that for each $\\delta>0$ and a compact set $K$ (here I set K=supp f) there is a function $f_\\delta\\in N(\\delta)$, such that $1_K\\leq f_\\delta$ and the equality $(\\varphi^x)_\\delta(f_\\delta)=\\varphi^x(f_\\delta)$.",
"And I assume that you use compact support of $f$ to see that $\\phi_{\\delta}$ is also upper smicontinuous (after all, you're then working on the compact space given by the support of $f$)?",
"So, at the first, I noticed that the function $x \\mapsto \\varphi_\\delta (f)$ is lower semi-continous (as you said), next, i proved that the same function $x \\mapsto \\varphi_\\delta (f)$ is also upper semi-continous, so it is continous. Finally, $x \\mapsto \\varphi_0 (f)$ is upper semicontinous as monotonically decreasing limit of upper semi-continuous (even continous) functions.",
"Now let us get rid of all unnecessary ballast. Fix $f \\geq 0$. Then you're taking the supremum over the continuous functions $x \\mapsto \\phi^x (f)$. In my world this gives a lower semicontinuous function in general. Now you're taking a monotonically decreasing limit of lower semi-continuous functions to get $\\phi_{0}(f)$. So why on earth is that upper semicontinuous? Also, where did you get that from? The theorem is definitely not well-known to me.",
"exactly yes, this is a construction from the proof of the theorem that i gave",
"I'm not sure if I understand your definition of $\\varphi_{\\delta}$. Do you mean you're taking the supremum over all $h$ such that $0 \\leq h \\leq f$ with the property that $\\operatorname{supp}{h}$ can be covered by finitely many balls of radius $\\delta$?"
] | 0 |
Science
| 0 |
318
|
math
|
Removing deterministic discontinuities from semi-martingales
|
Let $X:=(X_t)_{0 \le t \le T}$ be a solution of the SDE
$$ X_t = X_0 + \int_0^t \sigma(s,X_s) dW_s + \sum_{i=1}^n f_i(X_{t_i^-}) 1_{\{t > t_i\}}$$
where $t_1,\cdots,t_n \in [0,T]$ and $(f_i)_{1 \le i \le n}$ is a family of measurable functions. My goal is to remove jumps from $X$ using a change of variable
$$ Y_t = \Phi(t,X_t)$$
where $\Phi$ is yet to be found. When the $f_i$'s are affine, it is possible to remove the deterministic term in Itô's lemma by choosing an affine form for $\Phi$ in the space variable and a piecewise constant form in the time variable.
I could not find any result in the literature regarding the general case, i.e. $f_i$ polynomial, or even discontinuous itself. Is there a general approach for removing deterministic discontinuities from semi-martingales?
|
https://math.stackexchange.com/questions/414887
|
[
"stochastic-processes",
"stochastic-calculus",
"martingales"
] | 22 | 2013-06-08T11:27:11 |
[
"Please may you specify precisely what you mean by \"remove jumps from $X$\"? Which properties do you want $Y$ to have?",
"I clarified my question. I do not assmue $X$ is a local martingale anymore.",
"I implicitly meant the sum of a local martingale and a pure jump process whose jumps are predictable ($\\mathcal{F}_{t_i^-}$-measurable) and jump dates are constant ($\\mathcal{F}_0$-measurable)."
] | 0 |
Science
| 0 |
319
|
math
|
Are $3^6-6^3$ and $4^8-8^4$ the only sums of four $a^b-b^a,1\lt a\lt b$ numbers?
|
Question
How many numbers of the form $a_0^{b_0}-b_0^{a_0}$ are a "nontrivial" sum of four such numbers $a_i^{b_i}-b_i^{a_i}$?
Here "nontrivial" means: all unordered pairs $\{a_i,b_i\}$ are distinct, $a_i^{b_i}\ne b_i^{a_i}$ and $1 \lt a_i\lt b_i$.
This implies that such summands are positive (are in OEIS A045575), except $2^3-3^2 = -1$.
The only "nontrivial" examples I could find are:
$$\begin{align}
(2^5-5^2) + (2^6-6^2) + (2^7-7^2) + (4^5-5^4) &= (3^6-6^3) \\
(2^8-8^2)+ (4^5-5^4) + (4^6-6^4) + (3^{10}-10^3) &= (4^8-8^4)
\end{align}$$
Are these two the only such numbers?
For comparison, I suspect such sums with fewer than four summands do not exist, and that there are infinitely many such sums with more than four summands. With exactly four summands, I have only these two examples, hence this question.
Are there any other references (than ones listed in OEIS A045575) on problems related to $x^y-y^x$ numbers?
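(For readers who want to experiment: here is a small brute-force sketch in Python, written by me, which should rediscover the two sums displayed above by searching a modest window of pairs $\{a,b\}$; genuinely new examples, if they exist, would of course need a much larger window and a smarter search.)
from itertools import combinations

def t(a, b):
    return a**b - b**a

# "nontrivial" pairs 1 < a < b with a^b != b^a, over a small search window
pairs = [(a, b) for a in range(2, 7) for b in range(a + 1, 17) if t(a, b) != 0]
value = {p: t(*p) for p in pairs}
by_value = {t(*p): p for p in pairs}   # assumes these values are distinct in this window

for quad in combinations(pairs, 4):
    s = sum(value[p] for p in quad)
    target = by_value.get(s)
    if target is not None and target not in quad:
        print(quad, "->", target)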
Background
These two numbers correspond to the following two Base-Exponent Invariant numbers:
$$\begin{array}{rclcl}
1464 &=& 2^5 + 2^6 + 2^7 + 4^5 + 6^3 &=& 5^2 + 6^2 + 7^2 + 5^4 + 3^6 \\
68521 &=& 2^8 + 4^5 + 4^6 + 3^{10} + 8^4 &=& 8^2 + 5^4 + 6^4 + 10^3 + 4^8
\end{array}$$
That is, these numbers are a special case of the "Base-Exponent Invariant" numbers.
I call these "Order-$5$ Genus-$1$" Base-Exponent Invariant numbers $1464,68521\in G^{(5)}_1$.
In general, I have found only $14$ examples (see "short examples" in this answer) of "Order-$5$ Base-Exponent Invariant numbers". The largest known example is around $6\cdot 10^6$, while the next one, if it exists, is larger than $10^{16}$.
General near examples
I've searched for the smallest "error" $e(n)$ such that "some elements plus the error" are a sum of the "other elements" from the "best" 5-subset of A045575 among "nontrivial" 5-subsets whose largest element is the $n$th nonzero term of A045575.
If $e(n)=0$ (and $n\ge 5$) then we have general example(s) and $(n,0)$ is colored blue (or green if the corresponding example is also "Genus-$1$"). If $e(n)=\pm 1$ we have a "near example" (colored red). Else, we have a black point $(n,\log e(n))$. For $n$ up to $100$, we have the log plot of errors:
Notice that for $n\gt 43$, we have no general examples, and for $n\gt 25$ we have no "Genus-$1$" examples (the examples I'm asking about in this question), so far.
It would seem that new examples are very large and rare or do not exist. However, notice the far right "near example" (red point) at $n=83$, which gives us hope
$$
(2^8-8^2) + (2^{16}-16^2) + (4^{16}-16^4) + (2^{32}-32^2) = (2^{33}-33^2) \color{red}{+1}
$$
that maybe a large example could exist.
Do there exist any larger general examples, Genus-1 examples or near examples?
|
https://math.stackexchange.com/questions/3832522
|
[
"number-theory",
"elementary-number-theory",
"reference-request",
"examples-counterexamples",
"recreational-mathematics"
] | 21 | 2020-09-19T10:29:20 |
[
"@jjagmath It is included as trivial: The \"nontrivial\" condition contains \"$a_i^{b_i}\\ne b_i^{a_i}$\", implying summands that equal zero are not being considered.",
"$2^3-3^2=-1$ is not the only exception. $2^4-4^2 = 0$. That should be also consider as trivial?"
] | 0 |
Science
| 0 |
320
|
math
|
Fibonacci-like sequences mod $p$ where $a_{n+1}$ only really depends on $a_n$.
|
Consider a prime $p$ and a sequence $(a_n)_{n\ge 0}$ in $\mathbb{F}_p$ satisfying $a_{n+2}=a_{n+1}+a_n$ for all $n\ge 0$.
Now, assume that each element of the sequence only really depends on the previous one. That is, assume there exists a function $f:\mathbb{F}_p\to\mathbb{F}_p$ such that $a_{n+1}=f(a_n)$ for all $n\ge 0$.
If $p\not\in\{2,5\}$ and $5$ is a quadratic residue mod $p$ there are the obvious sequences
$$\left(c\left[\frac12+\frac12\sqrt5\right]^n\right)_{n\ge 0}\quad\text{and}\quad\left(c\left[\frac12-\frac12\sqrt5\right]^n\right)_{n\ge 0}$$
for $c\in\mathbb{F}_p$ any constant, but are there any others?
Computational results by @Servaes
Let's call two Fibonacci-like sequences modulo $p$ equivalent if they can be turned into each other through shifting and multiplication by units; then any sequence in which each element is a function of the previous one and which is not equivalent to
$$(0)_{n\ge 0},\quad (c^n)_{n\ge 0}$$
where $c^2=c+1$, is called strange.
@Servaes has shown through computation that the first few primes for which strange sequences exist are
$$199, 211, 233, 281, 421, 461, 521, 557, 859, 911.$$
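(A small unoptimised Python sketch of mine that should reproduce the beginning of this list: using the scaling equivalence it normalises $a_0=1$, walks one full period of the pair sequence, and checks whether the successor map is single-valued.)
def well_defined(p, a0, a1):
    # Is a_{n+1} a function of a_n alone for the Fibonacci-like sequence mod p
    # starting with the pair (a0, a1)?  The pair sequence is purely periodic,
    # so walk one full period and check the successor map is single-valued.
    succ = {}
    x, y = a0, a1
    while True:
        if x in succ and succ[x] != y:
            return False
        succ[x] = y
        x, y = y, (x + y) % p
        if (x, y) == (a0, a1):
            return True

def has_strange(p):
    # Up to scaling we may take a0 = 1; the well-defined sequences that are
    # NOT strange are then exactly those with a1 = c where c^2 = c + 1.
    roots = {c for c in range(p) if (c * c - c - 1) % p == 0}
    return any(a1 not in roots and well_defined(p, 1, a1) for a1 in range(p))

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

print([p for p in range(3, 300) if is_prime(p) and has_strange(p)])
# expected, from the list above: [199, 211, 233, 281]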
Own work
For any prime $p$ let $\pi(p)$ be the Pisano period mod $p$ (so this is not the prime counting function)
Claim: Let $p$ be a prime, $(a_n)_{n\ge 0}$ a strange sequence. Then it has period $\pi(p)$.
Proof: Let $A=\{a_n:n\ge 0\}$ be the set of attained values and $f:A\to A$ the function which makes this sequence strange, in the sense that $a_{n+1}=f(a_n)$ for all $n\ge 0$. Clearly, $f$ is a bijection.
It is easily proved that, for all $a\in A$ and $n\in\mathbb{Z}$,
$$f^n(a)=F_{n-1}a+F_nf(a).$$
Let $m$ be the period of $(a_n)_{n\ge 0}$, then clearly $m$ is the smallest positive integer $n$ for which $f^n=\operatorname{id}_A$. Thus, for all $a\in A$,
$$(1-F_{m-1})a=F_mf(a)$$
If $F_m\neq 0$ it follows that $f(a)=F_m^{-1}(1-F_{m-1})a$ and the sequence is equivalent to $\left(c^n\right)_{n\ge 0}$ where $c=F_m^{-1}(1-F_{m-1})$ satisfies $c^2=c+1$. This contradicts our assumption that the sequence is strange, so $F_m=0$.
Since the sequence is not the null sequence, we may take $a\in A$ non-zero and conclude that $F_{m-1}=1$. It follows that $\pi(p)\mid m$. Since
$$f^{\pi(p)}(a)=F_{\pi(p)-1}a+F_{\pi(p)}f(a)=F_{-1}a+F_0f(a)=a,$$
the opposite division relation holds as well and we are done.
EDIT: I asked a slightly more general version of this question on Mathoverflow and linked to this question, but forgot to link the Mathoverflow question here.
|
https://math.stackexchange.com/questions/3827693
|
[
"number-theory",
"elementary-number-theory",
"recurrence-relations",
"finite-fields",
"fibonacci-numbers"
] | 21 | 2020-09-15T13:57:17 |
[
"@DonThousand Sure, no problem.",
"@OP Do you mind if I hijack some of this awesome work for a codegolf challenge? Credit will be attributed.",
"Giving a bit of extra attention to this within the Pearl Dive project. Also editing my Pearl Dive \"ad\" in Meta.MathOverflow.",
"Yes, it stands to reason that small Pisano periods are needed for this to happen. Modulo $p=199$ the zeros of $x^2=x+1$ have multiplicative orders $22$ and $11$ (both in $\\Bbb{F}_p^*$ as $\\left(\\dfrac5p\\right)=1$). Modulo $p=233$ they are both of order $52$ in $\\Bbb{F}_{p^2}^*$. Not clear what else is needed to guarantee no repetitions within a period.",
"@Mastrem Not at all, I just ran my code for $p<1000$ initially.",
"This is a really good question and if it doesn't get an answer in the next few days I'd strongly suggest asking it also on Mathoverflow with a link to this question.",
"@Servaes By your earlier comment, did you mean you did not find any such primes beyond $911$?",
"Also, I could not help but notice that these exceptional primes appear quite early on in the list of Fibonacci primitive parts. I'm not sure I can imagine any sort of connection, certainly not at this hour.",
"For $p=211$ I get: $$6,8,9,23,24,26,29,32,33,34,45,48,62,71,76,98,108,109,119,124,127, 128, 132, 133, 135, 137, 139, 146, 152, 156, 157, 158, 162, 166, 168, 169, 173, 177, 179, 181, 187, 195, 199, 208.$$ For $p=233$ I get: $$7,9,12,13,14,15,17,18,19,20,23,24,27,30,31,33,34,36,39,42,43,47,49,50,51,53,55,56,61,62,65,67,69,72,73,76,81,82,84,87,92,96,97,98,101,102,104,106,108,110,113,114,120,121,124,126,128,130,132,133,136,137,138,142,147,150,152,153,158,161,162,165,167,169,172,173,178,179,181,183,184,185,187,191,192,195,198,200,201,203,204,207,210,211,214,215,216,217,219,220,221,222,225,227.$$",
"Quite surprisingly, it seems that the number of values for $a_1$ that yields such a sequence is either $1+\\left(\\tfrac{5}{p}\\right)$, except for the primes I just listed, in which case the number of values for $a_1$ is huge!",
"For $p=199$ I found the starting values $a_0=1$ and $a_1$ is one of: $$6, 8, 9, 10, 11, 12, 14, 15, 18, 20, 21, 23, 24, 25, 26, 27, 29, 32, 34, 38, 39, 40, 42, 43, 44, 46, 48, 53, 54, 55, 57, 60, 61, 62, 63, 64, 67, 72, 77, 80, 81, 82, 84, 86, 87, 88, 89, 93, 96, 102, 103, 104, 105, 106, 108, 109, 110, 111, 114, 115, 117, 118, 122, 124, 125, 127, 128, 129, 130, 131, 135, 136, 137, 138, 139, 140, 141, 142, 144, 147, 148, 149, 150, 152, 153, 155, 156, 159, 162, 163, 164, 165, 167, 169, 171, 172, 173, 177, 178, 179, 180, 181, 182, 183, 184, 185, 187, 189, 190, 193, 195, 196.$$",
"I'm still cleaning up the proof, but I believe that the period of any sequence not of the 'standard' types must be the Pisano period mod p.",
"@Servaes Interesting! Could you perhaps compute the starting two values of such a sequence mod $199$ which does not correspond to a root of $X^2-X-1$?",
"Some simple python code shows that the first primes for which there are not either $0$ or $2$ such sequences, up to multiplication by constants and shifting, are: $$5,\\ 199,\\ 211,\\ 233,\\ 281,\\ 421,\\ 461,\\ 521,\\ 557,\\ 859,\\ 911.$$",
"Correction to my earlier comment; only $2$ sequences for $p=29$, also corresponding to the roots of $X^2-X-1$.",
"@Servaes If it's easier to check whether any sequences exist, I suggest you look only at primes $p$ such that $5$ is not a quadratic residue modulo $p$.",
"@Servaes Note that, mod $11$, the roots of $X^2-X-1$ are $4$ and $8$, so the given sequences are exactly of the type I describe in my question.",
"A computer search yields plenty of examples. For $p=11$: $$(1, 4, 5, 9, 3, 1, 4,\\ldots),\\qquad (1, 8, 9, 6, 4, 10, 3, 2, 5, 7, 1, 8,\\ldots),$$ and $2$ sequences for $p=19$, up to shifts, and $16$ sequences for $p=29$, up to shifts, etc.",
"Also, except the zero sequence, no sequence can contain a $0$. Nor can it have $a_{n+1}=a_n$ for any $n$. Moreover, if $(a_n)_{n\\geq0}$ is such a sequence the so is $(ca_n)_{n\\geq0}$ for any constant $c$. So without loss of generality $a_0=1$. This leaves $p-2$ initial values for $a_1$, for each $p$.",
"For $p=5$ the sequence $(2,1,3,4,2,1,\\ldots)$ works. And of course the zero sequence works for any $p$. There are no other sequences (up to shifting) for $p\\leq5$."
] | 0 |
Science
| 0 |
321
|
math
|
A Polynomial Formed from the Roots of Another Polynomial ad infinitum
|
Let $P(x)$ be a monic polynomial of degree $d$ with complex coefficients. Let $r_1(P),r_2(P),\dots, r_d(P)$ denote the set of roots, ordered so that $|r_1(P)| \leq |r_2(P)|\leq\dots\leq |r_d(P)|$. Define the map $T$ by:
$$(TP)(x)=x^d+r_1(P)x^{d-1}+r_2(P)x^{d-2}+\dots+r_d(P),$$
i.e. $TP$ is the monic polynomial whose coefficients are the roots of $P$.
Let us call a monic polynomial periodic if $T^KP=P$ for some $K>0$.
The question is: for any $d>0$, does there exist a periodic polynomial of degree $d$, other than the trivial solution $x^d$?
Remark on the definition of T
As pointed out in the comments, the definition of $TP$ is ambiguous if there are two roots $r_i(P)$ and $r_j(P)$ such that $|r_i(P)|=|r_j(P)|$ and $r_i(P)\neq r_j(P)$. If the roots of $P$ have this property, then you may break the ties however you please. For example, if $P(x)=x^3-x$, then it is up to you whether to set $r_2(P)=1$ and $r_3(P)=-1$ or $r_2(P)=-1$ and $r_3(P)=1$. However, either ordering still must have $r_1(P)=0$, since there is no ambiguity there.
Note that the set of polynomials that have this ambiguity has measure zero, so I suspect such considerations will not influence the solution of the problem anyway.
Empirical Evidence
If $d=1$ then the answer is clearly yes (any $P(x)=x-a$ will do the job, with $a\ne 0$). If $d=2$ then $P(x)=x^2+x-2$ is a fixed point of $T$, so in particular is periodic with period 1.
I examined other low degrees by numerical simulation. Note that this requires relaxing the definition of a cycle, since testing for exact equality of floating point numbers is impossible. Thus, for these simulations, the condition $T^KP=P$ was replaced with $\|T^KP-P\|_\infty<\varepsilon$, with $\varepsilon=10^{-10}$. In particular, these simulations can only find polynomials $P$ that are periodic up to some fixed error tolerance.
The simulation was done by first initializing the coefficients of $P$ using values drawn from a standard normal distribution, and then iteratively applying $T$ 1000 times and checking whether the obtained sequence was eventually periodic (up to error $<\varepsilon$). Note that this method might not find all cycles.
The periods found thusly for low degrees are:
$$
\begin{array}{rc}
d=3 & \text{possible periods}= 1 ; 11 \\
4 & 21 \\
5 & 4 ; 56 \\
6 & 34 ; 44 \\
7 & 10 ; 15 ; 26 ; 234 \\
8 & 3 ; 38 ; 83 ; 292 \\
9 & 256 ; 311 ; 466 \\
10 & 275 ; 336
\end{array}
$$
Furthermore, for degrees $\leq 8$, all of the simulated sequences eventually became periodic, however this was not true for $d=9$ or $10$ (of course, this does not imply that these sequences never become periodic, just that they did not before the simulation ended).
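Here is a minimal Python/NumPy sketch of the simulation just described (my own code, not the program actually used for the table above): it implements $T$ via numpy.roots, iterates from a random monic polynomial, and reports the length of the first approximate cycle it detects.
import numpy as np

def T(coeffs):
    # one application of T to a monic polynomial given by its coefficient
    # vector [1, c_1, ..., c_d] (highest degree first)
    roots = np.roots(coeffs)
    roots = roots[np.argsort(np.abs(roots))]      # order by increasing modulus
    return np.concatenate(([1.0 + 0j], roots))

def find_period(d, iters=1000, eps=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    p = np.concatenate(([1.0 + 0j], rng.standard_normal(d) + 0j))
    history = [p]
    for _ in range(iters):
        p = T(history[-1])
        for back, q in enumerate(reversed(history), start=1):
            if np.max(np.abs(p - q)) < eps:
                return back                        # approximate period
        history.append(p)
    return None                                    # no cycle detected within iters

for d in range(2, 6):
    print(d, find_period(d))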
Crossposted to Mathoverflow: https://mathoverflow.net/questions/364359/a-polynomial-formed-from-the-roots-of-another-polynomial-ad-infinitum
|
https://math.stackexchange.com/questions/3724155
|
[
"sequences-and-series",
"polynomials",
"complex-numbers",
"dynamical-systems"
] | 21 | 2020-06-17T15:38:33 |
[
"The backward way from $TP$ to $P$ by Vieta formulae looks much more simple to calculate and has no ambiguous cases. On the other hand, to assure that the found orbit $T^{-n}P$, $n\\in \\Bbb N\\cup\\{0\\}$ provides an answer, we should check that for each polynomial of the orbit the sequence of absolute values of its coefficients is non-decreasing.",
"@MikeHawk also, id prefer if u resolved the ambiguity of the defn of $T$ by just allowing us to order how we please when trying to find a periodic point, but it's ur question so u can make the rules",
"@MikeHawk u now have excluded the trivial solution $x^d$. fix ur question. also, nice username",
"Your $T$ is not well defined. When there are several different roots of $P$ with the same absolute value there is no definite rule of ordering these within the coefficient sequence of $TP$.",
"@JG, you are correct, I have edited the question",
"Sorry if I’m misreading something, but isn’t $x^d$ trivially periodic?"
] | 0 |
Science
| 0 |
322
|
math
|
Smallest region that can contain all free $n$-ominoes.
|
A nine-cell region is the smallest subset of the plane that can contain all twelve free pentominoes, as illustrated below. (A free polyomino is one that can be rotated and flipped.)
A twelve-cell region is the smallest subset of the plane that can contain all $35$ free hexominoes.
What is the smallest region of the plane that can contain all $108$ free heptominoes (shown below)? All $369$ free octominoes?
Also, is there an existing OEIS sequence for this? If not, I'll add one once there is a bit more data. (The sequence begins $1, 2, 4, 6, 9, 12, \cdots$.)
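Not an answer, but a self-contained Python sketch (my own) that generates the free $n$-ominoes and tests whether a candidate region contains them all. The 17-cell container tried at the end is the one suggested in the comments below for $n=7$; the left-aligned row layout is my guess, since the comment does not spell out the alignment.
def normalise(cells):
    # translate a set of (row, col) cells so its minimum row and column are 0
    r0 = min(r for r, _ in cells)
    c0 = min(c for _, c in cells)
    return frozenset((r - r0, c - c0) for r, c in cells)

def orientations(cells):
    # the up-to-8 rotations/reflections of a polyomino, each normalised
    out = set()
    cur = normalise(cells)
    for _ in range(4):
        cur = normalise(frozenset((c, -r) for r, c in cur))          # rotate 90
        out.add(cur)
        out.add(normalise(frozenset((r, -c) for r, c in cur)))       # mirror
    return out

def canonical(cells):
    # a canonical representative of a free polyomino
    return min(tuple(sorted(o)) for o in orientations(cells))

def free_polyominoes(n):
    # all free n-ominoes, grown one cell at a time
    shapes = {canonical({(0, 0)})}
    for _ in range(n - 1):
        bigger = set()
        for s in shapes:
            cells = set(s)
            for r, c in cells:
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nb = (r + dr, c + dc)
                    if nb not in cells:
                        bigger.add(canonical(cells | {nb}))
        shapes = bigger
    return shapes

def fits(piece, region):
    # can the free polyomino `piece` be placed (rotated/flipped/translated)
    # inside `region`?  Both are collections of (row, col) cells.
    region = normalise(region)
    rmax = max(r for r, _ in region)
    cmax = max(c for _, c in region)
    for o in orientations(piece):
        pr = max(r for r, _ in o)
        pc = max(c for _, c in o)
        if pr > rmax or pc > cmax:
            continue
        for dr in range(rmax - pr + 1):
            for dc in range(cmax - pc + 1):
                if all((r + dr, c + dc) in region for r, c in o):
                    return True
    return False

rows = ["11", "111", "11111", "1111111"]     # candidate 17-cell container (alignment guessed)
container = {(r, c) for r, row in enumerate(rows) for c, ch in enumerate(row) if ch == "1"}
heptominoes = free_polyominoes(7)
print(len(heptominoes))                      # should be 108, as stated above
print(sum(fits(p, container) for p in heptominoes), "of them fit")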
|
https://math.stackexchange.com/questions/2831675
|
[
"extremal-combinatorics",
"oeis",
"polyomino"
] | 21 | 2018-06-25T10:48:06 |
[
"Some upper bounds$$\\begin{array}{rcl} n & a(n)\\le & \\text{example container}\\\\ \\hline 7 & 17 & \\small\\verb/11-111-11111-1111111/\\\\ 8 & 20 & \\small\\verb/111-111-111111-11111111/\\\\ 9 & 27 & \\small\\verb/111-111-11111-1111111-111111111/\\\\ 10 & 31 & \\small\\verb/111-1111-111111-11111111-1111111111/\\\\ 11 & 38 & \\small\\verb/111-1111-1111-1111111-111111111-11111111111/\\\\ 12 & 43 & \\small\\verb/1111-1111-11111-11111111-1111111111-111111111111/ \\end{array}$$",
"I submitted a related computational challenge over at Programming Puzzles & Code Golf Stack Exchange.",
"@achillehui, my program confirmed that this container works, so indeed $a(8) \\leq 20$.",
"if I didn' make any mistake, $a(8) \\le 20$. example container 00000111-00000111-00111111-11111111",
"For the heptomino case, a search over all shapes containable in a $4 \\times 7$ bbox only return regions of size $17$. I don't know how to extend the emulation to $5 \\times 7$ bbox but this is a strong indication that $a(7) = 17$.",
"You can make the bound $a(n) \\leq n \\cdot \\left\\lceil \\dfrac{n}{2} \\right\\rceil$. It's not very good though.",
"And $a(10) \\leq 32$.",
"If $a(n)$ is the minimum size region covering all $n$-ominoes, then I've constructed examples to show $a(7) \\leq 17$, $a(8) \\leq 22$ and $a(9) \\leq 26$."
] | 0 |
Science
| 0 |
323
|
math
|
Convergence acceleration technique for $\zeta(4)$ (or $\eta(4)$) via creative telescoping?
|
Question
Is it already known whether the $\zeta(4):=\sum_{n=1}^{\infty}1/n^4$ accelerated convergence series $(1)$, proved for instance in [1, Corollaire 5.3], could be obtained by a similar technique to the ones explained by Alf van der Poorten in [2, section 1] for $\zeta(3)$ and $\zeta(2)$?
$$\zeta(4)=\frac{36}{17}\sum_{n=1}^{\infty}\frac{1}{n^{4}\binom{2n}{n}}.\tag{1}$$
(a) In other words, does there exist a pair of functions $F(n,k), G(n,k)$ obeying equation
$$F(n+1,k)-F(n,k)=G(n,k+1)-G(n,k)\tag{$\ast$}$$
from which $(1)$ can be proved? That is, is it possible to transform the defining series for $\zeta(4):=\sum_{n=1}^{\infty}1/n^4$ by means of the Wilf-Zeilberger method (or the Markov-WZ Method) into the faster series $(1)$? (b) Most likely there isn't any such pair $(F, G)$, but I do not have the means to use these methods on my own.
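(As a quick numerical sanity check of $(1)$, not part of the question itself, here is a short mpmath computation of my own; the two printed numbers should agree to the working precision.)
from mpmath import mp, nsum, binomial, zeta, inf

mp.dps = 40
rhs = mp.mpf(36) / 17 * nsum(lambda n: 1 / (n**4 * binomial(2 * n, n)), [1, inf])
print(zeta(4))
print(rhs)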
Short description of section 1 of Alf van der Poorten's paper
The defining series for $\zeta(3):=\sum_{n=1}^{\infty}1/n^3$ and $\zeta(2):=\sum_{n=1}^{\infty}1/n^2$ are accelerated resulting in
\begin{equation*}
\zeta (2)=3\sum_{n=1}^{\infty }
\frac{1}{n^{2}\binom{2n}{n}},\tag{2}
\end{equation*}
\begin{equation*}
\zeta (3)=\frac{5}{2}\sum_{n=1}^{\infty }
\frac{(-1)^{n-1}}{n^{3}\binom{2n}{n}}\tag{3}.
\end{equation*}
For instance, $(3)$ follows from the identity
\begin{equation*}
\sum_{n=1}^{N}\frac{1}{n^{3}}-2\sum_{n=1}^{N}\frac{\left( -1\right) ^{n-1}}{n^{3}\binom{2n}{n}}=\sum_{k=1}^{N}\frac{(-1)^{k}}{2k^{3}\binom{N+k}{k}\binom{N}{k}}-\sum_{k=1}^{N}\frac{(-1)^{k}}{2k^{3}\binom{2k}{k}}\tag{4},
\end{equation*}
by letting $N\rightarrow \infty $ and noticing that
\begin{equation*}
\lim_{N\to\infty}\sum_{k=1}^{N}\frac{(-1)^{k}}{2k^{3}\binom{N+k}{k}\binom{N}{k}}=0.
\end{equation*}
Equality $(4)$ can be explained as follows:
Write
\begin{equation*}
X_{n,k}=\frac{(-1)^{k-1}}{k^{2}\binom{n+k}{k}\binom{n-1}{k}},\qquad D_{n,k}=\frac{(-1)^{k}}{n^{2}\binom{n+k}{k}\binom{n-1}{k}}\qquad k<n.
\end{equation*}
Notice that $$X_{n,k}=D_{n,k-1}-D_{n,k}.\tag{5}$$ Hence
\begin{eqnarray*}
\sum_{k=1}^{n-1}\frac{X_{n,k}}{n} &=&\sum_{k=1}^{n-1}\left( \frac{D_{n,k-1}}{
n}-\frac{D_{n,k}}{n}\right) =\frac{D_{n,0}}{n}-\frac{D_{n,n-1}}{n} \\
&=&\frac{1}{n^{3}}-2\frac{\left( -1\right) ^{n-1}}{n^{3}\binom{2n}{n}},\qquad\frac{D_{n,0}}{n} =\frac{1}{n^{3}},\quad \frac{D_{n,n-1}}{n}=2\frac{
\left( -1\right) ^{n-1}}{n^{3}\binom{2n}{n}}
\end{eqnarray*}
Sum over $k$, $1\leq
k\leq n-1$
\begin{equation*}
\sum_{k=1}^{n-1}X_{n,k}=\sum_{k=1}^{n-1}\left( D_{n,k-1}-D_{n,k}\right)
=D_{n,0}-D_{n,n-1}.
\end{equation*}
Now, summing over $n$, $1\leq n\leq N$, and noticing that
\begin{equation*}
\frac{X_{n,k}}{n}=E_{n,k}-E_{n-1,k},\qquad E_{n,k}=\frac{(-1)^{k}}{2k^{3}\binom{n+k}{k}\binom{n}{k}},\tag{6}
\end{equation*}
we obtain
\begin{equation*}
\sum_{k=1}^{N-1}\sum_{n=k+1}^{N}\frac{X_{n,k}}{n}=\sum_{k=1}^{N-1}
\sum_{n=k+1}^{N}\left( E_{n,k}-E_{n-1,k}\right) =\sum_{k=1}^{N}\left(
E_{N,k}-E_{k,k}\right).
\end{equation*}
So, on the one hand
\begin{eqnarray*}
\sum_{n=1}^{N}\sum_{k=1}^{n-1}\frac{X_{n,k}}{n} &=&\sum_{n=1}^{N}\frac{1}{
n^{3}}-2\sum_{n=1}^{N}\frac{\left( -1\right) ^{n-1}}{n^{3}\binom{2n}{n}},\tag{7}
\end{eqnarray*}
and on the other hand
\begin{eqnarray*}
\sum_{n=1}^{N}\sum_{k=1}^{n-1}\frac{X_{n,k}}{n}
&=&\sum_{k=1}^{N}E_{N,k}-\sum_{k=1}^{N}E_{k,k} \\
&=&\sum_{k=1}^{N}\frac{(-1)^{k}}{2k^{3}\binom{N+k}{k}\binom{N}{k}}
-\sum_{k=1}^{N}\frac{(-1)^{k}}{2k^{3}\binom{2k}{k}}.\tag{8}
\end{eqnarray*}
The identity $(4)$ follows.
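(The identity $(4)$ is also easy to confirm in exact arithmetic; the short Python check below, my own, verifies it for the first twenty values of $N$.)
from fractions import Fraction
from math import comb

def lhs(N):
    return sum(Fraction(1, n**3) for n in range(1, N + 1)) \
        - 2 * sum(Fraction((-1)**(n - 1), n**3 * comb(2 * n, n)) for n in range(1, N + 1))

def rhs(N):
    return sum(Fraction((-1)**k, 2 * k**3 * comb(N + k, k) * comb(N, k)) for k in range(1, N + 1)) \
        - sum(Fraction((-1)**k, 2 * k**3 * comb(2 * k, k)) for k in range(1, N + 1))

print(all(lhs(N) == rhs(N) for N in range(1, 21)))   # True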
Remarks
The combination of equations $(5)$ and $(6)$ forms an identity of the form $(\ast)$, which is equation $(6.1.2)$ of [3, chapter 6] (Zeilberger's Algorithm).
As for $(2)$, [2, section 1] actually explains how to accelerate $\eta(2):=\sum_{n=1}^{\infty }(-1)^{n-1}/n^{2}$ and obtain $(2)$, using the
relation $\eta(s) = \left(1-2^{1-s}\right) \zeta(s)$. As such, if feasible, I expect that accelerating $\eta(4)$ might be easier than $\zeta(4)$.
References
Henri Cohen, Généralisation d'une Construction de R. Apéry
Alfred van der Poorten, Some wonderful formulae... Footnotes to Apery's proof of the irrationality of $\zeta(3)$
Marko Petkovsek, Herbert Wilf, Doron Zeilberger, A = B
|
https://math.stackexchange.com/questions/2281000
|
[
"sequences-and-series",
"number-theory",
"summation",
"experimental-mathematics",
"convergence-acceleration"
] | 21 | 2017-05-14T12:36:23 |
[
"See also this fast converging series en.wikipedia.org/wiki/…",
"@Masacroso In short, can (1) be derived from an identity similar to (4)?",
"what is the question?"
] | 0 |
Science
| 0 |
324
|
math
|
Surprising continued fractions of numbers in the form $\sum_{n=0}^\infty \frac{1}{a^{2^n}}$, including the same pattern for every $a>2$
|
I've been interested in the numbers of this form because it can be proved that for integer $a \geq 2$ all of them are irrational: $$x_a=\sum_{n=0}^\infty \frac{1}{a^{2^n}}$$
They satisfy the conditions listed in this paper: The Approximation of Numbers as Sums of Reciprocals. This is related to my other question.
Now I decided to look at simple continued fractions of such numbers and noticed a surprising thing. For each $a$ I checked, the CF entries consist of only three numbers (except the first entry, which is why I show $1/x$ instead of $x$):
$$x_2=\sum_{n=0}^\infty \frac{1}{2^{2^n}}=0.8164215090218931437080797375305252217$$
$$\frac{1}{x_2}=1+\cfrac{1}{4+\cfrac{1}{2+\cfrac{1}{4+\cfrac{1}{4+\cfrac{1}{6+\dots}}}}}$$
Writing the CF in a more convenient form, we obtain for $200$ digits:
$1/x_2=$[1; 4, 2, 4, 4, 6, 4, 2, 4, 6, 2, 4, 6, 4, 4, 2, 4, 6, 2, 4, 4, 6, 4, 2, 6, 4, 2, 4, 6, 4, 4, 2, 4, 6, 2, 4, 4, 6, 4, 2, 4, 6, 2, 4, 6, 4, 4, 2, 6, 4, 2, 4, 4, 6, 4, 2, 6, 4, 2, 4, 6, 4, 4, 2, 4, 6, 2, 4, 4, 6, 4, 2, 4, 6, 2, 4, 6, 4, 4, 2, 4, 6, 2, 4, 4, 6, 4, 2, 6, 4, 2, 4, 6, 4, 4, 2, 6, 4, 2, 4, 4, 6, 4, 2, 4, 6, 2, 4, 6, 4, 4, 2, 6, 4, 2, 4, 4, 6, 4, 2, 6, 4, 2, 4, 6, 4, 4, 2, 4, 6, 2, 4, 4, 6, 4, 2, 4, 6, 2, 4, 6, 4, 4, 2, 4, 6, 2, 4, 4, 6, 4, 2, 6, 4, 2, 4, 6, 4, 4, 2, 4,...]
Clearly, all of the entries are $2,4$ or $6$.
The same goes for other examples:
$1/x_3=$[2; 5, 3, 3, 1, 3, 5, 3, 1, 5, 3, 1, 3, 3, 5, 3, 1, 5, 3, 3, 1, 3, 5, 1, 3, 5, 3, 1, 3, 3, 5, 3, 1, 5, 3, 3, 1, 3, 5, 3, 1, 5, 3, 1, 3, 3, 5, 1, 3, 5, 3, 3, 1, 3, 5, 1, 3, 5, 3, 1, 3, 3, 5, 3, 1, 5, 3, 3, 1, 3, 5, 3, 1, 5, 3, 1, 3, 3, 5, 3, 1, 5, 3, 3, 1, 3, 5, 1, 3, 5, 3, 1, 3, 3, 5, 1, 3, 5, 3, 3, 1, 3, 5, 3, 1, 5, 3, 1, 3, 3, 5, 1, 3, 5, 3, 3, 1, 3, 5, 1, 3, 5, 3, 1, 3, 3, 5, 3, 1, 5, 3, 3, 1, 3, 5, 3, 1, 5, 3, 1, 3, 3, 5, 3, 1, 5, 3, 3, 1, 3, 5, 1, 3, 5, 3, 1, 3, 3, 5, ,...]
$1/x_5=$[4; 7, 5, 5, 3, 5, 7, 5, 3, 7, 5, 3, 5, 5, 7, 5, 3, 7, 5, 5, 3, 5, 7, 3, 5, 7, 5, 3, 5, 5, 7, 5, 3, 7, 5, 5, 3, 5, 7, 5, 3, 7, 5, 3, 5, 5, 7, 3, 5, 7, 5, 5, 3, 5, 7, 3, 5, 7, 5, 3, 5, 5, 7, 5, 3, 7, 5, 5, 3, 5, 7, 5, 3, 7, 5, 3, 5, 5, 7, 5, 3, 7, 5, 5, 3, 5, 7, 3, 5, 7, 5, 3, 5, 5, 7, 3, 5, 7, 5, 5, 3, 5, 7, 5, 3, 7, 5, 3, 5, 5, 7, 3, 5, 7, 5, 5, 3, 5, 7, 3, 5, 7, 5, 3, 5, 5, 7, 5, 3, 7, 5, 5, 3, 5, 7, 5, 3, 7, 5,...]
How can this phenomenon be explained?
The further questions are:
Can we prove that all the CF entries for these numbers belong to a fixed set of three integers?
If so, can we make any conclusions about transcendentality of these numbers?
The implications are interesting. It is conjectured that algebraic numbers of degree $>2$ should have arbitrarily large CF entries at some point. See this paper. Meanwhile we know, that for degree $2$ the CF is (eventually) periodic.
Notice also the same 'pattern' that occurs for the examples with odd $a$ here: the lists of CF entries proceed in the same way.
If we subtract the list of entries for $a=3$ from the list of entries for $a=5$, we obtain:
$$L_5-L_3=[0;2,2,2,2,2,2,2,2,2,...]$$
$$L_7-L_3=[0;4,4,4,4,4,4,4,4,4,...]$$
$$L_{113}-L_3=[0;110,110,110,110,110,110,...]$$
For $6$ and $2$ it goes the same way, but not for $4$ and $2$; there is some 'scrambling' there.
How can we prove/explain these facts? The same pattern for different $a$ seems very strange to me, especially if the numbers are transcendental.
Basically, if this turns out to be true, then from the CF for $x_3$ we will immediately obtain the CF for every $x_{2n+1}$.
Important update. See http://oeis.org/A004200 for the case $a=3$; it seems like these continued fractions have a pattern. The following paper is linked there: Simple continued fractions for some irrational numbers
The pattern is the same for every $a$ except $2$, so not only for the odd $a$.
Moreover, look at the continued fractions for:
$$y_{ap}=a^p x_a$$
For integer $p$ you will notice a very apparent pattern.
My questions are largely answered by the linked paper. I will try to write up a short summary and post it as an answer, but anyone is free to do it before me.
It turns out a very close question was asked before: Continued fraction for $c= \sum_{k=0}^\infty \frac 1{2^{2^k}} $ - is there a systematic expression?
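For anyone who wants to regenerate these expansions, here is a short Python sketch of mine. It computes the continued fraction of $1/x_a$ from an exact truncated sum; since the omitted tail (of order $a^{-4096}$ with the default below) is astronomically smaller than the precision probed by the first hundred or so partial quotients, those initial entries coincide with the ones of the irrational limit.
from fractions import Fraction

def x_a(a, terms=12):
    # exact partial sum of sum_{n>=0} 1/a^(2^n)
    return sum(Fraction(1, a ** (2 ** n)) for n in range(terms))

def cf(x, count=60):
    # first `count` simple continued fraction entries of a positive rational x
    out = []
    for _ in range(count):
        q = x.numerator // x.denominator
        out.append(q)
        x -= q
        if x == 0:
            break
        x = 1 / x
    return out

for a in (2, 3, 5):
    print(a, cf(1 / x_a(a)))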
|
https://math.stackexchange.com/questions/1906832
|
[
"elementary-number-theory",
"irrational-numbers",
"continued-fractions",
"egyptian-fractions"
] | 21 | 2016-08-28T15:51:52 |
[
"@RobertIsrael, I have corrected my question, thank you",
"@RobArthan, yes, thank you. I get it now",
"@YuriyS: thanks for the link. I think you misunderstood the conjecture (which is far from trivial). the conjecture was that the simple continued fractions for algebraic numbers is either periodic or contains arbitrarily large coefficients. Read the abstract of the paper in your link (and then the rest of the paper) for clarification.",
"What is conjectured is that the continued fraction of any real algebraic number of degree $> 2$ has arbitrarily large entries.",
"@YuriyS: I am intrigued by the conjecture. Can you give a pointer to the paper you found about this conjecture, please.",
"Two simple CFs are equal iff their sequences of coefficients are equal. If the set of allowed coefficients includes two numbers $c_0$ and $c_1$, then you get a distinct CF for every real number $\\alpha \\in (0, 1)$, by mapping the binary representation of $\\alpha$, a sequence of $0$s and $1$s, to the corresponding sequence of $c_0$s and $c_1$s.",
"@YuriyS: There is a bijection between them and the interval $[0, 1]$ expressed in base $b$, where $b$ is the number of integers in the set, right?",
"@RobArthan, how do you prove there are uncountably many such CFs?",
"There are uncountably many simple continued fractions with coefficients drawn from any given finite set of $2$ or more positive integers. So I don't think the conjecture you have in mind in your last two paragraphs can be correct without some further qualification."
] | 0 |
Science
| 0 |
325
|
math
|
Which digit occurs most often?
|
Is there any method to calculate which digit occurs most often in the number
$$4 \uparrow \uparrow \uparrow \uparrow 4\ ,$$
the fourth Ackermann number?
Or would it be necessary to calculate the number digit by digit?
I only know that the last $n$ digits can be calculated, but this does not help much.
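To illustrate that last remark, here is a short Python sketch of my own for the trailing digits. It relies on the unwinding $4\uparrow\uparrow\uparrow\uparrow 4 = 4\uparrow\uparrow M$ for some astronomically large $M$, on the lifted form of Euler's theorem $a^E \equiv a^{(E \bmod \varphi(m)) + \varphi(m)} \pmod m$, valid once $E \ge \log_2 m$, and on the fact that the last $k$ digits of $4\uparrow\uparrow n$ stabilise once $n$ exceeds the length of the iterated-totient chain of $10^k$; a tower of $100$ fours is therefore tall enough for $20$ digits.
def phi(m):
    # Euler's totient by trial division; the moduli occurring below are all
    # of the form 2^a * 5^b, so this is instantaneous
    result, n, d = m, m, 2
    while d * d <= n:
        if n % d == 0:
            while n % d == 0:
                n //= d
            result -= result // d
        d += 1
    if n > 1:
        result -= result // n
    return result

def tower4_mod(height, m):
    # (4^^height) mod m, i.e. a power tower of `height` fours reduced mod m;
    # for height >= 4 the true exponent 4^^(height-1) certainly exceeds
    # log2(m), so the lifted-exponent form of Euler's theorem applies
    if m == 1:
        return 0
    if height == 1:
        return 4 % m
    if height == 2:
        return 256 % m
    if height == 3:
        return pow(4, 256, m)
    t = phi(m)
    return pow(4, tower4_mod(height - 1, t) + t, m)

# last 20 digits of 4^^^^4 (equivalently, of any sufficiently tall tower of fours)
print(str(tower4_mod(100, 10**20)).zfill(20))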
|
https://math.stackexchange.com/questions/653940
|
[
"elementary-number-theory",
"decimal-expansion",
"tetration",
"hyperoperation"
] | 21 | 2014-01-27T16:29:40 |
[
"According to Wolfram Alpha, the number's too large to even represent, so that's saying something. It wouldn't be practical to calculate the 4th Ackermann number much less count its digits, so I can see why the urgency to have a simpler method is here.",
"I did not offer this bounty. And this is a very old question, I just wondered whether there is a trick that allows us to answer this question, which is apparently not the case. And I would find it very interesting, if it would be possible.",
"Even if we know the answer to this question, this is a dead end, unstructural information that needs a lot of work to obtain. (Despite of the bounty, please provide context and motivation, e.g. show the own effort to solve the issue. Why this function invented by humans for other reasons, why the digits in the very peculiar base ten used by humans for other reasons, and why $4\\uparrow^44$?) As a parallel, let us consider some smaller \"similar\" number, $2\\uparrow\\uparrow 4=65536$, which is the profit to know a part of the statistics of the occurence of the digits?",
"In general, we know nothing about the digits of large powers, other than their last ones.",
"been any progress ?"
] | 0 |
Science
| 0 |
326
|
math
|
Do Hopf bundles give all relations between these "composition factors"?
|
Write a fiber bundle $F\to E\to B$ in short as $E=B\ltimes F$ (in analogy with groups).
(This is not necessary, but: given another bundle $X\to B\to Y$, we can write $E=(Y\ltimes X)\ltimes F$, but we may also compose $E\to B$ with $B\to Y$ to get fibrations $E\to Y$, whose fibers $Z$ fit inside fibrations $F\to Z\to X$ hence $Z=X\ltimes F$ and $E=Y\ltimes(X\ltimes F)$ as well. Thus we have a kind of associativity property $(Y\ltimes X)\ltimes F= Y\ltimes(X\ltimes F)$.)
I noticed something about sequences involving $\mathrm{Spin}(7)$.
First of all, unit imaginary octonions are square roots of negative one, so $L:\mathrm{Im}(\mathbb{O})\to\mathrm{End}_{\mathbb{R}}(\mathbb{O})$ which sends $u$ to left-multiplication-by-$u$ is "Clifford" and so extends to a representation of the Clifford algebra $\mathrm{Cliff}(\mathrm{Im}(\mathbb{O}))\to\mathrm{End}_{\mathbb{R}}(\mathbb{O})$, which restricts to a map $\mathrm{Spin}(7)\to\mathrm{SO}(8)$.
The point-stabilizer of any point in $S^7\subset\mathbb{O}\cong\mathbb{R}^8$ is $G_2=\mathrm{Aut}(\mathbb{O})$, and the point-stabilizer of any point in $S^6\subset\mathrm{Im}(\mathbb{O})$ is $\mathrm{SU}(3)$. Thus, we have some bundles
$$ \begin{array}{ccccc} G_2 & \to & \mathrm{Spin}(7) & \to & S^7 \\ \mathrm{SU}(3) & \to & G_2 & \to & S^6 \\ \mathrm{SU}(2) & \to & \mathrm{SU}(3) & \to & S^5 \end{array}$$
which combined with $\mathrm{SU}(2)\simeq S^3$ gives
$$ \mathrm{Spin}(7)=S^7\ltimes (S^6\ltimes (S^5\ltimes S^3)), $$
and when combined with the Hopf bundles $S^7=S^4\ltimes S^3$ and $ S^3=S^2\ltimes S^1$ becomes
$$ \begin{array}{lcl} \mathrm{Spin}(7) & = & (S^4\ltimes (S^2\ltimes S^1))\ltimes (S^6\ltimes(S^5\ltimes S^3)) \\ & \mathrm{or} & (S^4\ltimes S^3)\ltimes(S^6\ltimes(S^5\ltimes(S^2\ltimes S^1))). \end{array} $$
On the other hand, we have $\mathrm{Spin}(n)=S^{n-1}\ltimes\mathrm{Spin}(n-1)$ (where $\mathrm{Spin}(n)\to\mathrm{SO}(n)$ gives an action on $S^{n-1}\subset\mathbb{R}^n$ and we invoke orbit-stabilizer again), which means
$$ \mathrm{Spin}(7)=S^6\ltimes(S^5\ltimes(S^4\ltimes (S^3\ltimes (S^2\ltimes S^1)))). $$
So it looks like given any two sets of "composition factors" for $\mathrm{Spin}(7)$, they can made equivalent using the Hopf bundles. Is this a general principle or is there an obvious counterexample? That is, if something can be written as $S^{n_1}\ltimes\cdots\ltimes S^{n_k}$ in two different ways, can the (multisets of) composition factors be equated after using the Hopf bundles to replace $S^7$ with $S^4$ and $S^3$ etc.?
(Hopefully this is not too vague and sloppy so as to be cryptic, and not trivial.)
|
https://math.stackexchange.com/questions/2200929
|
[
"homotopy-theory",
"fiber-bundles",
"division-algebras",
"spin-geometry",
"hopf-fibration"
] | 21 | 2017-03-24T00:07:25 |
[] | 0 |
Science
| 0 |
327
|
math
|
Gauss-Lucas Theorem (roots of derivatives)
|
Gauss-Lucas Theorem states:
"Let f be a polynomial and $f'$ the derivative of $f$. Then the theorem states that the $n-1$ roots of $f'$ all lie within the convex hull of the $n$ roots $\alpha_1,\ldots,\alpha_n$ of $f$."
My Question is:
Is there a theorem which states that there exists a permutation $\sigma \in S_n$ such that the inner area of the polygon whose edges go through the roots of $f$
$$\alpha_{\sigma(1)}\longrightarrow\alpha_{\sigma(2)}\longrightarrow\ldots\longrightarrow\alpha_{\sigma(n)}\longrightarrow\alpha_{\sigma(1)}$$ contains all roots of $f'$?
EDIT (OB) It is not completely clear from the original question whether the OP allowed for self-intersections of the polygonal curve with vertices the roots of $f$. The question that has a bounty on its head asks for a polygonal Jordan curve with vertices the roots of $f$ containing the roots of $f'$, further assuming $f$ has simple roots. Roots of $f'$ are allowed to lie on the edges of the polygonal Jordan curve. We further assume $n\geq 3$ and that the roots of $f$ are not all aligned (i.e. not all contained in a real affine line).
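To experiment numerically, one can use the sketch below (Python with NumPy and Matplotlib, written by me, not part of the original question): for a given polynomial it tries every ordering of the roots and tests, with a small tolerance for critical points sitting on edges, whether the closed polygon through the roots contains all roots of $f'$. It does not check that the polygonal curve is simple, so any reported ordering still has to be inspected for self-intersections, and it is only practical for small degrees.
import numpy as np
from itertools import permutations
from matplotlib.path import Path

def orderings_containing_critical_points(coeffs, tol=1e-6):
    # coefficients are given highest degree first; yields orderings of the
    # roots whose closed polygon contains every root of the derivative,
    # accepting points within `tol` of an edge
    roots = np.roots(coeffs)
    crit = np.roots(np.polyder(coeffs))
    pts = np.column_stack([crit.real, crit.imag])
    for perm in permutations(roots):
        verts = np.column_stack([np.real(perm), np.imag(perm)])
        path = Path(verts)
        # test with both signs of the radius so boundary points are accepted
        # regardless of the orientation of the polygon
        inside = path.contains_points(pts, radius=tol) | \
                 path.contains_points(pts, radius=-tol)
        if inside.all():
            yield perm

# example from the comments below: f(z) = z^4 - z, where some critical points
# necessarily lie on edges of the polygon
for perm in orderings_containing_critical_points([1, 0, 0, -1, 0]):
    print(np.round(perm, 3))
    break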
|
https://math.stackexchange.com/questions/164670
|
[
"calculus",
"geometry",
"polynomials",
"convex-analysis"
] | 21 | 2012-06-29T11:30:59 |
[
"I found an interesting paper: arxiv.org/pdf/1405.0689",
"@HansEngler Indeed! The same happens if you consider $(X^2+1)X(X^2-1)(X^2-4)$ for instance.",
"It certainly can happen that some roots of $f'$ must lie on edges of any such polygonal Jordan curve. For example, take $f(z) = z^4 - z$ with roots $0$ and $a_i$, where the $a_i$ are roots of unity. Then the roots of $f'$ are $2^{-2/3}a_i$.",
"@Maesumi the OP certainly meant to write \"whose edges\". I might consider rewriting the question and including an example, the only problem is that I don't know how to draw plots. What the OP means by inner area I can explain in a few words: you have a closed polygonal path (in my question it is a Jordan path, so no self intersections) that links all roots of $f$, and there is a compact \"inner area\" and a non compact \"outer area\", the same way when you draw a circle, it defines a compact disk $\\lbrace |z|\\leq 1\\rbrace$ and the non compact outer area $\\lbrace |z|>1\\rbrace$.",
"I think it might be better to re-write the question from scratch and add an example or further explanation. what is \"inner area\" and what do you mean by \"which edges\"?",
"@LeonidKovalev you are right. I'll edit my edit ^^ thanks for your help.",
"@OlivierBégassat If the roots all lie on the same line, we can't have a polygonal Jordan curve through them. Maybe you wanted to say that the roots are in generic position: no three on the same line. And $n\\ge 3$.",
"I started a bounty on this question. To get the reward, either give a counter-example, or produce a proof! Assume all roots to be distinct (this is not very restrictive, at least when it comes to producing a counter example, since we can modify the constant term of $f$ slightly, so that all its roots are now distinct, without affecting the position of its roots too much, and obviously not the roots of its derivative), and the polygon should be a polygonal Jordan curve.",
"This is a very interesting question.",
"Is the permutation required to be such that the edges do not cross? Otherwise it's hard to interpret the inner area. In any case, you have to allow the roots on the boundary of the polygon, as the case $n=2$ shows."
] | 0 |
Science
| 0 |
328
|
math
|
Number as the sum of digits of some degree
|
We will say that the measure of a number $n$ is the maximum degree $k$ such that each of the powers $n^1,\dots,n^k$ can be written as a sum of numbers formed by consecutive groups of its digits, read left to right (you may not rearrange the digits). For example, the measure of $55$ is $5$, because $$ 55^1 = 55, \quad 55 = 55$$ $$ 55^2 = 3025, \quad 30+25 = 55 $$ $$ 55^3 = 166375, \quad 1+6+6+37+5 = 55$$ $$ 55^4 = 9150625, \quad 9+15+0+6+25 = 55 $$$$55 ^ 5 = 503284375, \quad 5 + 0 + 3 + 28 + 4 + 3 + 7 + 5 = 55.$$
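To make this concrete, here is a small Python sketch (my own illustration, not part of the original problem) that tests whether the digits of $n^k$ can be grouped, left to right, into blocks summing to $n$; following the examples above and the comments, the measure is taken to be the largest $k$ for which every power up to $n^k$ admits such a grouping (capped, since e.g. $10$ has infinite measure).
from functools import lru_cache
def splittable(s, target):
    # Can the digit string s be cut into consecutive blocks whose values sum to target?
    @lru_cache(maxsize=None)
    def rec(i, remaining):
        if i == len(s):
            return remaining == 0
        value = 0
        for j in range(i + 1, len(s) + 1):
            value = value * 10 + int(s[j - 1])
            if value > remaining:
                break
            if rec(j, remaining - value):
                return True
        return False
    return rec(0, target)
def measure(n, cap=30):
    k = 0
    while k < cap and splittable(str(n ** (k + 1)), n):
        k += 1
    return k
for n in (2, 9, 10, 55):
    print(n, measure(n))   # per the examples/comments: 2 -> 1, 9 -> 2, 10 -> cap (infinite), 55 -> 5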
Let $a_n$ be the sequence of numbers whose measure is greater than the measure of every smaller number. What are the asymptotics of this sequence? Is it possible to somehow construct numbers with a given measure? If not, which measures cannot be attained?
The problem was posted on a Russian forum; I am asking a slightly different question here, and I will be glad of any help with its solution :)
|
https://math.stackexchange.com/questions/2812010
|
[
"sequences-and-series",
"number-theory",
"elementary-number-theory",
"asymptotics"
] | 20 | 2018-06-07T18:18:43 |
[
"We shouldn't have an immediate obstacle $\\mod 9$, so interesting numbers are $0$ or $1$ modulo $9$. Assuming that, one would expect that the sum of digits of $n^k$ is about $\\frac 9{2\\log 10} k\\log n$, so the power may be as large as $\\frac {2n\\log 10}{9\\log n}$ and, if you are lucky, you can go up from here for numbers divisible by $10$ because the growth rate of the sum of the digits for them is slower. For the numbers in the table that do not end with $0$, this is an almost exact match. However, the third (and last) term in your sequence is formally $10$, whose measure is $+\\infty$.",
"All previous must have",
"For Maximum, do you mean that all of the previous degrees have to have the same property? Or just the biggest degree with that property?",
"Yes, it is obvious that the problem for numbers ending with zero is uninteresting.",
"What is most interesting is that in the examples above the measure value coincides with the exact upper bound of the measure.",
"The Russian forum also presents some results, it seems to me amazing such large numbers on such relatively small results. $ n \\quad b(n) \\\\ 675 \\quad 50 \\\\ 945 \\quad 68 \\\\ 964 \\quad 71 \\\\ 990 \\quad 107 \\\\ 991 \\quad 71 \\\\ 1296 \\quad 84 \\\\ 1702 \\quad 114 \\\\ 2728 \\quad 173 \\\\ 4879 \\quad 285 \\\\ 5050 \\quad 403 \\\\ 5149 \\quad 300 \\\\ 5292 \\quad 309 \\\\ $",
"I ask the answer to one of the questions. Do you think it's bad when there are so many of them?",
"What exactly are you asking?",
"All natural large ones are considered, the first term in the sequence is obviously $2$ with measure $1$, and the second term is 9, since $$ 9 ^ 2 = 81, \\quad 8 + 1 = 9 $$"
] | 0 |
Science
| 0 |
329
|
math
|
Does every finitely generated group have finitely many retracts up to isomorphism?
|
The infinite dihedral group $D_\infty = \langle a,b \mid a^2 = b^2 = \text{Id}\rangle $ is a finitely generated group with infinitely many cyclic subgroups of order 2, every one of which is a retract.
For the group $\mathbb{Z}\oplus\mathbb{Z}$, take $H_n=\langle (1,n)\rangle$ for any integer $n$ (with $K=\langle (0,1)\rangle$). Then we have $\mathbb{Z}\oplus \mathbb{Z}=H_n \oplus K$ which shows that $\mathbb{Z}\oplus\mathbb{Z}$ (hence every finitely generated abelian group) has infinitely many different retracts.
Every free group $F_n$ of finite rank $n$ has infinitely many retracts. In fact, each free factor of $F_n$ is a retract and there are infinitely many free factors.
These are examples of finitely generated groups with infinitely many retracts. If we look at them, we'll find that they have only finitely many retracts up to isomorphism. My question is: does every finitely generated group have finitely many retracts up to isomorphism?
|
https://math.stackexchange.com/questions/4528730
|
[
"group-theory",
"finitely-generated",
"retraction"
] | 20 | 2022-09-10T08:32:53 |
[
"@CheerfulParsnip You are right. I'll add free groups of finite rank to my list. Thanks very much for the comment.",
"You can add finitely generated free groups to your list of examples. Any retract of $F_n$ has rank $\\leq n$. Maybe look for a counterexample in groups like Grigorchuk's Group."
] | 0 |
Science
| 0 |
330
|
math
|
A difficult integral for the Chern number
|
The integral
$$
I(m)=\frac{1}{4\pi}\int_{-\pi}^{\pi}\mathrm{d}x\int_{-\pi}^\pi\mathrm{d}y \frac{m\cos(x)\cos(y)-\cos x-\cos y}{\left( \sin^2x+\sin^2y +(m-\cos x-\cos y)^2\right)^{3/2}}
$$
gives the Chern number of a certain vector bundle [1] over a torus. It can be shown using the theory of characteristic classes that
$$
I(m) = \frac{\mathrm{sign}(m-2)+\mathrm{sign}(m+2)}{2}-\mathrm{sign}(m) = \begin{cases}1 & -2< m < 0 \\ -1 & 0 < m < 2 \\0 & \text{otherwise}\end{cases}.
$$
Is there any way to evaluate this integral directly (i.e. without making use of methods from differential geometry) to obtain the above result?
I should mention that the above integral can be written as ($1/4\pi$ times) the signed solid angle swept out on the unit sphere by the unit vector $\hat{\mathbf{n}}$,
$$
I(m)=\frac{1}{4\pi}\int_{-\pi}^{\pi}\mathrm{d}x\int_{-\pi}^\pi\mathrm{d}y\, \hat{\mathbf{n}}\cdot\left(\partial_x \hat{\mathbf{n}}\times \partial_y \hat{\mathbf{n}}\right),
$$
where $\hat{\mathbf{n}}=\mathbf{n}/|\mathbf{n}|$ and $\mathbf{n}(m)=(\sin x, \sin y, m- \cos x-\cos y)$. While this form makes it very straightforward to evaluate $I(m)$, I am interested in whether there is a way to compute this integral using more standard techniques.
[1] B. Bernevig, Topological Insulators and Topological Superconductors, Chapter 8.
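Not a step towards a residue-free derivation, but a quick numerical sanity check is easy with scipy (my own sketch; the sample values of $m$ are arbitrary, and the expected outputs are read off from the closed form above):
import numpy as np
from scipy.integrate import dblquad
def integrand(y, x, m):
    num = m * np.cos(x) * np.cos(y) - np.cos(x) - np.cos(y)
    den = (np.sin(x) ** 2 + np.sin(y) ** 2 + (m - np.cos(x) - np.cos(y)) ** 2) ** 1.5
    return num / den
for m in (-1.0, 1.0, 3.0):             # the closed form predicts roughly +1, -1, 0
    val, err = dblquad(integrand, -np.pi, np.pi, lambda x: -np.pi, lambda x: np.pi, args=(m,))
    print(m, val / (4 * np.pi))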
|
https://math.stackexchange.com/questions/4495174
|
[
"integration",
"multivariable-calculus",
"differential-geometry",
"definite-integrals",
"characteristic-classes"
] | 20 | 2022-07-18T03:39:30 |
[
"@TheSimpliFire done :) mathoverflow.net/questions/453343/…",
"@xzd209 I would suggest you crosspost this question on MathOverflow, ensuring the post includes a link to this MathStackExchange post.",
"@SetnessRamesory That case can probably be done with elliptic integrals but I still believe there must be an easier way to prove $I(m)=0\\quad\\forall|m|>2$, for instance. Due to the symmetry of the integrand and the exponent of $3/2$, an application of Green's theorem may be useful, similar to this approach.",
"@TheSimpliFire do you know$$\\int_{0}^{\\pi} \\int_{0}^{\\pi} \\frac{\\text{d}x\\text{d}y}{\\sqrt{\\sin(x)^2+\\sin(y)^2+\\left ( 1-\\cos(x)-\\cos(y) \\right )^2 } }=\\frac{\\Gamma\\left ( \\frac14 \\right )^4 }{8\\pi}?$$",
"The case $m=0$ is easy because we can rewrite as $-\\pi I(0)=\\int_0^\\pi f(x)\\,dx$ where $$f(x)=\\int_0^\\pi\\frac{\\cos x+\\cos y}{(2+2\\cos x\\cos y)^{3/2}}\\,dy$$ and we find that $f(x)=-f(\\pi-x)$ for $x\\in[0,\\pi]$ after substituting $u=\\pi-y$. So $I(0)=0$.",
"It appears there are two cases missing: $I(\\pm2)=\\mp1/2$."
] | 0 |
Science
| 0 |
331
|
math
|
Conjecture: No positive integer can be written as $a^b+b^a$ in more than one way
|
Today, I came up with the following problem when trying to solve this.
Are there distinct integers $a,b,m,n>1$ such that the equation $$a^b+b^a=m^n+n^m$$ holds? That is, is there ever an integer that can be written as $a^b+b^a$ in more than one way?
I claim that the answer is No, but I think solving this is beyond my knowledge. For a very preliminary observation, the simplest case is to consider powers of numbers ending in $1,5,6,0$, since such powers end in those same digits. For example, $$\begin{cases}a\equiv5\pmod{10}\\b\equiv6\pmod{10}\end{cases}\implies a^b+b^a\equiv1\pmod{10}.$$ However, this brings about an issue, since there is hardly any indication as to what values $m$ and $n$ can take, other than that they must have opposite parity.
PARI/GP code is
intfun(a,b,m,n)={for(i=2,a,for(j=2,b,for(k=2,m,for(l=2,n,if(i!=k && i!=l && j!=k && j!=l && i^j+j^i-k^l-l^k==0,print(i," ",j," ",k," ",l))))));}
No solutions have been found up to $a,b,m,n\le100$.
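Following the suggestion in the comments, an equivalent but much faster search checks whether $f(a,b)=a^b+b^a$ is injective on pairs $a\ge b\ge 2$ by hashing the values; here is a Python sketch (the bound is an arbitrary choice of mine).
LIMIT = 300                 # arbitrary illustrative bound
seen = {}                   # value -> the pair (a, b) that produced it
for a in range(2, LIMIT + 1):
    for b in range(2, a + 1):
        v = a ** b + b ** a
        if v in seen:
            print("collision:", seen[v], (a, b), "value", v)
        seen[v] = (a, b)
print("search finished up to", LIMIT)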
|
https://math.stackexchange.com/questions/3286093
|
[
"number-theory",
"modular-arithmetic",
"diophantine-equations",
"conjectures"
] | 20 | 2019-07-07T12:39:27 |
[
"@ZachHunter I ran a code that checks for collisions as you suggested, up to $a, b \\le 10000$, and I also ran it again for $a<70000$ and $b<500$. Did someone calculate asymptotics for this question? Can we say something along the lines \"the function $f$ gets so big so fact that intuitively it's very unlikely that we'll find such number considering the bruteforces we ran for small numbers\"?",
"Code could use updates: like parfor, or using the fact that if $a,b$ are same parity, $m,n$ are going to need to be same parity as well. Or that The code as written checks $2^5+5^2$ and $5^2+2^5$ you're doing more than $4$ times as many checks as needed.",
"Maybe this could help: $a^b+b^a=b^{\\frac{b\\ln(a)}{\\ln(b)}}+b^a=b^a(b^M+1)$ where $M=\\ln\\left(\\dfrac{\\sqrt[\\ln(b)]{b\\ln(a)}}{e^a}\\right)$",
"you can do a much broader search if you simply check if $f(a,b) = a^b+b^a$ is an injection for $(a,b) \\in \\mathbb{Z}^2, a \\geq b \\geq 2$. I was able to confirm this up for $a,b \\leq 1300$ here repl.it/repls/OutstandingEarlyAutomaticvectorization. You would easily be able to search through higher values if you ran this locally on your computer instead of through repl, but I don't have access to that at the moment.",
"from my reconfiguring it, we get if any pair of them share a factor n, we can get a difference of nth powers on one side.",
"If $a | (b-1)$ and $a |(x-1)$ and $ a |( y-1)$ then there are no solutions (obviously).",
"$x^y-a^b$ rather.",
"$a^b-x^y=b^a-y^x$ etc mean they either pump out factors on both sides, or are all coprime.",
"Fermat's little theorem will help.",
"Still no solution in the range $a,b,x,y\\le 200$",
"We can accelerate the search by assuming $a\\le b, x\\le y, a\\le x$"
] | 0 |
Science
| 0 |
332
|
math
|
If two convex polygons tile the plane, how many sides can one of them have?
|
The set of convex polygons which tile the plane is, as of $2017$, known: it consists of all triangles, all quadrilaterals, $15$ families of pentagons, and three families of hexagons. Euler's formula rules out strictly convex $n$-gons with $n\ge 7$. (The pentagonal case is by far the most difficult one.)
I am interested in pairs of convex polygons that can collectively tile the plane. Specifically, I am curious how many sides can be in a polygon which is part of such a tiling.
Here are some conditions to impose on such a tiling, from weakest to strongest:
There is at least one copy of each tile. (Without this condition, one can trivially take a pair consisting of a tiling polygon and any other convex polygon, and just never use the latter shape.)
There are at least $k$ copies of each tile.
There are infinitely many of each tile.
Every tile borders a tile of the other type.
The tiling is $2$-isohedral, i.e., every tile can be carried to any other tile of the same shape by a symmetry of the tiling.
Each of these conditions implies those above it.
In the weakest case, the number of sides is unbounded, as exhibited by the following example:
(The tiling is constructed by decomposing "wedges" of central angle $2\pi/N$ into congruent isosceles triangles, and then combining the central triangles to yield an $N$-gon in the center.)
Requiring at least $k$ of each tile still yields arbitrarily high numbers of sides, by taking the above construction for $N=M\cdot k$ and subdividing the $N$-gon into $k$ "wedges" which are $(M+2)$-gonal.
On the other end of the spectrum, I have found a $2$-isohedral tiling using regular $18$-gons, shown below:
After consulting this paper, it seems that the tiling pictured above is of type $4_2 18_{12}-1\text{a}\ \text{MN}\ \text{p}6\text{m}$ in their classification scheme (shown at the bottom of page 109); there are no $2$-isohedral tilings which allow for any higher number of contacts between different shapes, although type $3_1 18_{12}-1\text{a}\ \text{MN}\ \text{p}6\text{m}$ also works (and can be obtained from the above construction by cutting each kite-shaped tile in two). Thus, it is maximal among $2$-isohedral tilings.
What are the maximal tilings under weaker conditions? The maximal number of sides under each successively stronger restriction is a weakly decreasing sequence which goes $\infty, \infty, ?, ?, 18$. So far, I have no bounds on the missing two terms except that they are each at least $18$.
Some notes on this problem:
It is not necessarily the case that one of the tiles may tile the plane on its own; see this math.SE question for a counterexample.
If convexity is relaxed for either piece, the number of sides is unbounded even in the $2$-isohedral case (in fact, both pieces can simultaneously have arbitrarily many sides).
Edit: Crossposted to Math Overflow here.
|
https://math.stackexchange.com/questions/3984049
|
[
"geometry",
"plane-geometry",
"polygons",
"convex-geometry",
"tiling"
] | 20 | 2021-01-13T10:15:35 |
[] | 0 |
Science
| 0 |
333
|
math
|
Progressive Dice Game
|
$(2019.)$ Edit: Rewriting the question to make it clear.
The progressive dice game
At the start, you have a fair, regular six sided dice $D=(1,2,3,4,5,6)$.
The game is played for $n$ turns. Each turn you make a roll, which will be $r\in D$.
Then to complete the turn, you make one of the following choices:
Bank the rolled number: You gain $r$ points (score).
Invest in (upgrade) the dice "evenly": If $r\lt 6$, choose $r$ sides and increase them by $1$ each. If $r\ge 6$, that is, $r=6k+k_0,k_0\lt 6$, then increase each of the six sides by $k$, and then choose $k_0$ sides and increase them by $1$ each.
Reroll the dice, effectively restarting this turn. But before rerolling, you must apply a penalty by "evenly" downgrading the dice: let $S_0$ be the number of sides $\gt 0$ on the dice. If $r\lt S_0$, choose $r$ of those sides and decrease them by $1$ each. If $r\ge S_0$, that is, $r=S_0k+k_0$ with $k_0\lt S_0$, then decrease each of the $S_0$ positive sides by $k$, and then choose $k_0$ of them and decrease them by $1$ each.
What is the optimal way to play to maximize your expected score at the end of the game?
If the dice was allowed to be upgraded/downgraded arbitrarily (not "evenly"), then one could downgrade the first five sides until they reach $0$. These sides now act as free rerolls. Then, keep investing the remaining points into the sixth side, which is now guaranteed to be rolled on each turn, after some amount of rerolls of that turn. Finally, bank that sixth rolled side in the last couple turns to maximize the expected value of the score.
But since we must upgrade/downgrade evenly, I'm not sure what is the optimal strategy.
If we ignore the "reroll" move:
If you upgrade the first $t$ turns, then bank the rest of the turns, you will expect the following amount of points on average:
$$ f(t) = 3.5\times\left(\frac{7}{6}\right)^{t}\times(n-t)$$
This boils down to the following: to maximize your expected score, you should upgrade until the last $6$ (or $7$) turns and then bank those remaining turns.
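As a quick check of this formula (a sketch of mine, with an arbitrary illustrative horizon $n$), one can simply maximise $f(t)$ over integer $t$:
n = 30                                            # illustrative number of turns
f = {t: 3.5 * (7 / 6) ** t * (n - t) for t in range(n)}
t_best = max(f, key=f.get)
print(t_best, n - t_best, f[t_best])              # n - t_best should come out as 6 or 7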
But this approach completely ignores the third action; the rerolls.
Can we do better than this strategy, if we use the rerolls somehow?
Rerolls?
I haven't worked out the strategy if the rerolls are considered.
A reroll will on average decrease the average value of the dice, and allow you to either improve or worsen your current turn, with equal probability on average?
But there seem to be exceptions? For example, rerolling a $1$ seems useful if used early (as later, if we had upgraded a lot, all sides will be much greater than $1$). Simply downgrade the rolled $1$ side when downgrading (rerolling). If you roll that side again, it will be $0$, and this allows you to again reroll the dice for free (downgrading $0$ points is a free reroll). Which means, choosing to downgrade when you roll a $1$, can only increase your expected score in that turn. But there is still a (small?) drawback: Lets say some other side is a number $\ge 6$. Then when upgrading later, you will have to put back at least one of those upgrade points into that downgraded $0$ side.
Seems to me that rerolls will decrease the expected score on average
(as they decrease the average value of the dice), so it is always
better not to use them (except in that early scenario of the game, if $n$ is small)? Is this true?
For example, for small $n$, rerolls can be useful to force larger values. For $n=1$ specifically, it seems we can always force the first turn (the only turn) to end up banking a $6$, the maximal possible score for $n=1$, by rerolling and downgrading strategically the rest of the sides if $6$ was not rolled.
But for large $n$, the rerolls seem to lower the average expected score at the end, if used anytime in previous turns, as larger upgrades will need to replenish those downgraded points inevitably at some point, as they are carried out "evenly".
|
https://math.stackexchange.com/questions/1789111
|
[
"probability",
"recreational-mathematics",
"game-theory",
"dice"
] | 20 | 2016-05-17T08:25:12 |
[
"As he described you could decrement 1 to 0 but the next time you decrement you have to decrement any other number before you can decrement the 1 which is 0 now",
"I think the increment/decrement rule can be described thusly: If you roll a $k$, then you add/subtract $1$ to/from the smallest/largest number currently on the die, and repeat this action $k$ times. Or do you want the player to be able to choose the sides whose numbers get changed? (If so, then any time you roll a $1$, you should decrement that side to a $0$, which effectively gives you a free roll.)",
"@joriki Yes, a reroll is a new turn, more precisely it is starting your turn over, but with a bit downgraded dice. Yes, by maximise I tought of maximising the expected value, or in other words the average value over repeated games. Yes it \"starts over\", downgrade points work the same way as upgrades, but the zero field is simply not being looked at. The point is that the points are evenly distributed, not for example stacked at one side, and then being taken from the rest of the sides leading to a $(0,0,0,0,0,x)$ dice with $x$ being doubled each turn, and zeroes used as free rerolls.",
"Your formula seems right, but would be a bit easier to understand if you write it as $3.5\\cdot\\left(\\frac76\\right)^t(n-t)$.",
"And for the downgrade: Does this also start over once you've taken a point off all sides? If there are sides with $0$, does the downgrade start over once you've taken a point off all non-zero sides, or do the zero sides count towards the points despite not being further downgraded?",
"You can't maximise your score, since this is a random variable. I suspect what you mean is to maximise the expected value of the score?",
"And in the downgrade you remove as many points as you'd rolled, and then effectively get a new turn, in which you have a new choice whether to bank, upgrade or downgrade?",
"@mjqxxxx You do one side by one point, then repeat for another side, till you use up all your points that were rolled, if you do all sides and have points left to spent, you do the same with the remaining points again. (Example; rolled $4$ = do $4$ sides, each by $1$ point, Example; $8$ = do all sides by one point, then do two sides by one point)",
"Can you be more specific about the meaning of \"[increment] its sides distributing the points one by one\" and \"downgrade the dice evenly taking one by one point\"? Are you incrementing/decrementing all sides by one? Just the smallest/largest side?"
] | 0 |
Science
| 0 |
334
|
math
|
Is $\Phi(q)$ rational for some $q \in \mathbb{Q}^*$, where $\Phi$ is the standard normal cumulative distribution function?
|
Suppose that we have rational numbers $q_1$, $q_2$ such that
$$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{q_1}e^{-\frac{t^2}{2}} \,\mathrm{d}t=q_2.$$
Does this imply that $q_1=0$ and $q_2=\dfrac{1}{2}$?
|
https://math.stackexchange.com/questions/2248589
|
[
"number-theory",
"normal-distribution",
"transcendental-functions"
] | 20 | 2017-04-23T12:31:39 |
[
"We know that $ q_{2} = \\frac{1}{2} \\Big( 1 + \\text {erf} \\Big( \\frac{ q_{1} }{ \\sqrt{2} } \\Big) \\Big)$ and $ \\text{erf} (x) = 1 - \\frac{1}{ \\sqrt{ \\pi }} \\Gamma \\Big( \\frac{1}{2} , x^2 \\Big)$, so for $ q_{2} $ to be rational $ \\text {erf} \\Big( \\frac{ q_{1} }{ \\sqrt{2} } \\Big) $ must be zero. Therefore $ q_{1} = 0$ and $q_{2} = \\frac{1}{2}$.",
"I suspect $q_1$, $q_2$ can be even algebraic instead of rational (like in Lindemann-Weierstrass theorem showing that $\\sin x$ is transcendental)",
"Two different questions here: (a) $\\Phi(\\Phi^{-1}(1/4)) = 1/4$ and $q = \\Phi^{-1}(1/4) \\approx -0.6744898$ is a number with rational $\\Phi(q),$ with $q \\ne 0$, so Yes to the title. (b) However, I would not want to claim that $q$ is rational, so the main question remains unanswered."
] | 0 |
Science
| 0 |
335
|
math
|
Open problems in Federer's Geometric Measure Theory
|
I wanted to know if the problems mentioned in this book are solved. More specifically, in some places the author says that he doesn't know the answer, for example: "I do not know whether these equations are always true" (p. 361, 4.1.8), or "I do not know..." (p. 189, 2.10.26). Are there counterexamples or proofs of these assertions?
I add some details: in Theorem 2.10.25, if $f:X\rightarrow Y$ is a Lipschitzian map of metric spaces, $A\subset X$, $0\leq k<\infty$, and $0\leq m<\infty$, then
$$
\int_Y^*\mathscr{H}^k(A\cap f^{-1}\{y\})d\mathscr{H}^m(y)\leq (\mathrm{Lip} f)^m \frac{\alpha(k)\alpha(m)}{\alpha(k+m)}\mathscr{H}^{k+m}(A).
$$ (where $\mathscr{H}^n$ is the Hausdorff measure associated to the metrics on $X$ and $Y$, and $\int^*$ is the upper integral) provided either $\{y\in Y \mid \mathscr{H}^k(A\cap f^{-1}\{y\})>0\}$
is the union of a countable family of sets with finite $\mathscr{H}^m$ measure, or $Y$ is boundedly compact (each closed bounded subset is compact). The question Federer asks is to determine whether everything after "provided either..." is necessary.
The other question is about currents : let $S$, $T$ be two currents on open subsets $A$,$B$ of euclidean spaces, of degree $i$ and $j$ : if $S$, $T$ are representable by integration, do we always have $\Vert S\times T\Vert=\Vert S\Vert\times\Vert T\Vert$, and $\overrightarrow{S\times T}(a,b)=(\wedge_i p)\overrightarrow{S}(a)\wedge(\wedge_j q)\overrightarrow{T}(b)$ for $\Vert S\times T\Vert$ almost all $(a,b)\in A\times B$, where $p:A\rightarrow A\times B$ and $q:B\rightarrow A\times B$ are the canonical injections (it is indeed the case if either $\overrightarrow{S}$ or $\overrightarrow{T}$ is simple $\Vert S\Vert\times\Vert T\Vert$ almost everywhere).
|
https://math.stackexchange.com/questions/870912
|
[
"integration",
"measure-theory",
"geometric-measure-theory"
] | 20 | 2014-07-18T08:02:10 |
[
"It seems it has been proved in Measure theory and fine properties of functions, revised edition, by Evans.",
"Thanks, it would surprise me if there were no counter example or proof already available.",
"I'm a bit surprised, I have seen some discussion of Federer's text here before. Keep an eye on this from time to time, it may get an answer later.",
"Thank you anyway, I guess I'll have to find by myself. I really appreciate your help (I had no result either on math overflow).",
"well, I tried. Sorry.",
"I'll put a bounty on your question here once it's possible.",
"Thank you Mr Cook, I asked the question on math overflow and added the same détails here.",
"I wonder if you might get better input from math overflow on this one."
] | 0 |
Science
| 0 |
336
|
math
|
(Weil divisors : Cartier divisors) = (p-Cycles : ? )
|
Suppose $X$ verifies the suitable conditions in which Weil (resp. Cartier) divisors make sense.
The group of Weil divisors $\mathrm{Div}(X)$ on a scheme $X$ is the free abelian group generated by codimension $1$ irreducible subvarieties.
The sheaf of Cartier divisors is $\mathrm{Cart}_X:=\mathcal{K}_X^{\times}/\mathcal{O}_X^{\times}$.
The group of Cartier divisors is $\mathrm{Cart}(X):=\Gamma(X,\mathrm{Cart}_X)$.
The group $Z^p(X)$ of $p$-cycles is the free abelian group generated by irreducible subvarieties of codimension $p$, so in particular $\mathrm{Div}(X)=Z^1(X)$. So the notion of $p$-cycle is a direct generalization of the notion of Weil divisor.
My question:
Is there an analogous notion of group of "Cartier $p$-cycles" $\mathrm{Cart}^p(X)$? If yes, is there a sheaf $\mathrm{Cart}^p_X$ such that (naturally in $X$) we have $\mathrm{Cart}^p(X)=\Gamma(X,\mathrm{Cart}^p_X)$?
|
https://math.stackexchange.com/questions/226899
|
[
"algebraic-geometry",
"intersection-theory"
] | 20 | 2012-11-01T10:14:28 |
[
"As Matt said we can consider ideal sheaves locally generated by $p$ element. A variety of codimension $p$ satisfies this is called local complete intersection, so we don't get all $p$-cycles, but certainly we get the smooth ones since by the Jacobian criterion they smooth (sub)varieties are lci.",
"One naive attempt would be to say that Cartier divisors are \"locally principal.\" To get a codimension p object we'd need something like \"locally the zero set of p things\" which we could take to mean, lci or something. I have no idea how to make a sheaf that describes that, though."
] | 0 |
Science
| 0 |
337
|
math
|
Free medial magmas
|
A medial magma is a set $M$ with a binary operation $*$ satisfying $$(a*b)*(c*d) = (a*c)*(b*d)$$ for all $a,b,c,d \in M$. Medial magmas constitute a finitary algebraic category $\mathsf{Med}$; therefore there is a functor $M : \mathsf{Set} \to \mathsf{Med}$ which sends a set $X$ to the free medial magma $M(X)$ over $X$. Elements of $M(X)$ can be seen as equivalence classes of oriented non-empty binary trees whose leaves are marked with elements of $X$, where two such trees are equivalent if one can be reached from the other by a finite number of steps, where each step replaces a subtree of the form $(a*b)*(c*d)$ by the corresponding subtree $(a*c)*(b*d)$, or vice versa:
Now I wonder if there is a more explicit description of the underlying set of $M(X)$, or a specific system of representatives. Can we simplify it? For example, when $X=\{\star\}$, we have no markings; what is a specific system of representatives? I also wonder if these free medial magmas are studied or used anywhere in the literature.
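For what it's worth, the case $X=\{\star\}$ can at least be explored by machine; the Python sketch below (entirely my own, exploratory) enumerates all binary trees with $k$ leaves and counts the equivalence classes under the medial rewriting step, i.e. the number of degree-$k$ elements of $M(\{\star\})$.
from collections import deque
LEAF = '*'
def trees(k):
    # all rooted binary trees with k unlabelled leaves, as nested tuples
    if k == 1:
        return [LEAF]
    out = []
    for i in range(1, k):
        for left in trees(i):
            for right in trees(k - i):
                out.append((left, right))
    return out
def rewrites(t):
    # one application of (a*b)*(c*d) -> (a*c)*(b*d), at the root or inside a subtree
    if t == LEAF:
        return
    left, right = t
    if left != LEAF and right != LEAF:
        (a, b), (c, d) = left, right
        yield ((a, c), (b, d))
    for l2 in rewrites(left):
        yield (l2, right)
    for r2 in rewrites(right):
        yield (left, r2)
def count_classes(k):
    seen, classes = set(), 0
    for t in trees(k):
        if t in seen:
            continue
        classes += 1
        queue = deque([t])
        seen.add(t)
        while queue:                      # each rewrite can be undone by another rewrite
            u = queue.popleft()           # at the same position, so this BFS covers the class
            for v in rewrites(u):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return classes
for k in range(1, 8):
    print(k, count_classes(k), "classes among", len(trees(k)), "trees")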
|
https://math.stackexchange.com/questions/271611
|
[
"abstract-algebra",
"category-theory",
"trees"
] | 20 | 2013-01-06T08:06:35 |
[
"The commutative version has been studied under the name of \"level algebras\", see for example arxiv.org/pdf/math/0209363v3.pdf.",
"Me neither. In the introduction to the second paper I linked there are a few remarks on free medial groupoids. There is something to the effect that \"It seems that there is no very nice description of the equational theory of medial groupoids and free medial groupoids.\" I can only see the first two pages, so I don't know if there is more in later sections.",
"Thank you. In fact this gives more search results. But unfortunately I couldn't find a paper which discusses free medial groupoids.",
"It seems that searching for \"medial groupoid\" instead of \"medial magma\" yields more results (in universal algebra groupoid is synonymous with magma). For instance, you can find the work of J. Ježek, T. Kepka and others who produced a number of papers on related structures and their representations, e.g. here (where the free cancellative and the free commutative medial magma are constructed) or here."
] | 0 |
Science
| 0 |
338
|
math
|
Is the Frog game solvable in the root of a full binary tree?
|
Frog game
The Frog game is the generalization of the Frog Jumping (see it on Numberphile) that can be played on any graph, but by convention, we restrict the game to Tree graphs (see wikipedia).
The game is simple to play, but it can be hard to determine if it is solvable in a given vertex.
In short, given a graph, the goal of the game is for all frogs to host a party on a single vertex of that graph. Initially, every vertex has one frog. All of the $f$ frogs on some vertex can "jump" to some other vertex if they both have at least one frog on them and if they are exactly $f$ edges apart from each other. If a "sequence of jumps" exists such that all frogs end up on a single vertex, then the party is successfully hosted on that vertex and we call that vertex a "lazy toad" because the frog that started there never jumped.
Formal rules
Let $v,w,u\in V$ be vertices of a graph $G=(V,E)$ and $n=|V|$ the number of vertices.
Let $f_m:V\to\mathbb N$ count the number of frogs on a given vertex where $m$ is the current declared move. A game takes $(n-1)$ moves to solve (if possible) and is played as follows:
We start the game on move zero $m=0$ where each vertex has one frog $f_0(v)=1,\forall v\in V$.
If it is move $m\lt n-1$ and the following conditions are met:
There exist two vertices $v,w\in V$ that have frogs on them $f_m(v)\ge 1$, $f_m(w)\ge 1$.
There exists a path from $v$ to $w$ containing exactly $f_m(v)$ unique edges.
Then, a legal move (or jump) can be made and is denoted as $(v\to w)$. If the move is made, then the following transitions occur:
All frogs jump from $v$ to $w$, that is, $f_{m+1}(v)=0,f_{m+1}(w)=f_m(v)+f_m(w)$.
The remaining frogs don't move, that is, $f_{m+1}(u)=f_{m}(u),\forall u\not\in\{v,w\}$.
We declare the next move $m+1$. (The game ends if we can't move.)
We say that the game is solvable in some vertex $w$ if there exists a sequence of legal moves such that we end up moving all frogs to the vertex $w$, which means $f_{n-1}(w)=n$. Consequently, if the game is solvable in $w$ then $f_{n-1}(v)=0,\forall v\ne w$. The solvable vertex $w$ is called a "lazy toad" because the frog on the vertex $w$ never jumped during the game.
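To make the rules above concrete, here is a brute-force Python sketch of mine (exponential, so only practical for very small trees) that decides whether a given vertex is a lazy toad by exploring all legal move sequences; the adjacency lists at the end are just small illustrations.
from collections import deque
def solvable_in(target, adj):
    # Brute force: can all frogs be gathered on vertex `target`?
    # `adj` is an adjacency list of a tree; exponential, tiny trees only.
    n = len(adj)
    dist = []                                  # all-pairs distances via BFS
    for s in range(n):
        d = [-1] * n
        d[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if d[w] == -1:
                    d[w] = d[u] + 1
                    q.append(w)
        dist.append(d)
    seen = set()
    def dfs(state):
        if state[target] == n:
            return True
        if state in seen:
            return False
        seen.add(state)
        occupied = [v for v in range(n) if state[v] > 0]
        for v in occupied:                     # source of the jump
            for u in occupied:                 # destination (must already hold a frog)
                if u != v and dist[v][u] == state[v]:
                    nxt = list(state)
                    nxt[u] += nxt[v]
                    nxt[v] = 0
                    if dfs(tuple(nxt)):
                        return True
        return False
    return dfs(tuple([1] * n))
# T_1 (path on 3 vertices, root in the middle) and T_2 (7 vertices, root 0);
# per the h=1 and h=2 cases discussed below, expect True and False.
t1 = [[1], [0, 2], [1]]
t2 = [[1, 2], [0, 3, 4], [0, 5, 6], [1], [1], [2], [2]]
print(solvable_in(1, t1), solvable_in(0, t2))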
For some examples, you can visit a post about this game on mathpickle. There is also an included link to a google drive where all trees with less than $15$ vertices have been computationally solved and categorized by the number of vertices then by the number of non-solvable vertices.
Remark. I've asked a general question about this game before Frogs jumping on trees, which asks if we can characterize solvable and non-solvable vertices of certain sets of graphs, but that appears to be a hard problem. Here, I've decided to restrict the problem to binary trees and only observe the root vertex.
Question
Let $T_h=(V,E)$ be a full binary tree of height $h$. This means that it has layers $0,1,2,\dots,h$ where the root $r\in V$ is on layer $0$. That is, we have $|V|=2^{h+1}-1$. We take $h\in \mathbb N$, since $h=0$ gives a single vertex, which is trivial.
I would conjecture that for all $h\ge 4$, every vertex of $T_h$ is a "lazy toad" (is solvable).
However, for simplicity, my question is just about the root vertex:
Let $h\in\mathbb N, h\ne 2$. Can we prove that the root $r$ of every such $T_h$ is a "lazy toad" (is solvable) ?
Notation
Let $v(i,j)\in V$ be the $j$th non-root vertex on the layer $i=1,2,\dots,h$ where $j=1,2,\dots,2^i$.
Lets declare a shorthand notation for moves:
"$(v\to w)$ then $(w\to u)$" to be equivalent to "$((v\to w)\to u)$".
"$(v\to w)$ and $(u\to w)$" to be equivalent to "$(v\to w \leftarrow u)$".
For clarity, let $f=f_m(v)$ in "$\left(v\xrightarrow{f} w\right)$" stand for the number of frogs being moved.
Alternative suggestions for notation are welcome.
Examples (Reducing $T_3,T_4,T_5$ to $T_1$)
The fact that we can reduce examples to smaller examples makes me think that maybe induction is possible. Notice that $T_1$ was solved in the original version of the game where all vertices are on a line and that $T_2$ is not solvable, so $T_3$ is the first "real" case.
$$(h=1)$$ The trivial case $T_1$ is trivially solvable in all vertices:
[1.1] In $r$ using legal moves $v(1,1)\xrightarrow{1} r$ and $v(1,2)\xrightarrow{1} r$ in any order.
[1.2] In $v(1,1)$ by using moves $\left(r\xrightarrow{1} v(1,2)\right)\xrightarrow{2} v(1,1)$.
[1.3] In $v(1,2)$ by using moves $\left(r\xrightarrow{1} v(1,1)\right)\xrightarrow{2} v(1,2)$.
$$(h=2)$$ The second case, $T_2$, is not solvable in the root. This can be checked by listing all move sequences.
$$(h=3)$$ Lets break down the $T_3$ case. This case can be looked as a "base" trivial $T_1$ case that has two trivial $T_1$ cases "connected" to every leaf vertex. Let the vertices of those four "connected" trivial cases $T^{(1)}_1,T^{(2)}_1,T^{(3)}_1,T^{(4)}_1$ be indexed as: $v_1,v_2,v_3,v_4$ and let the vertices of the "base" trivial $T^{(0)}_1$ case be indexed as $v_0$.
That is, we can observe the $T_3$ as "composition" of $T_1$'s:
Now we can see that the solution in the root of $T_3$ follows from the solution of $T_1$. That is:
solve the "connected" $(T^{(i)}_1,i\in\{1,2,3,4\})$'s in a leaf vertex using [1.3]: $$\left(r_i\xrightarrow{1} v_i(1,1)\right)\xrightarrow{2} v_i(1,2),$$
then do the moves $v_i(1,2)\xrightarrow{3} r_0$ for $i\in\{1,2,3,4\}$ to transfer those frogs to root $r_0$,
then finally just solve the base $T^{(0)}_1$ with [1.1]: $v_0(1,1)\xrightarrow{1} r_0$ and $v_0(1,2)\xrightarrow{1} r_0$.
That is, the $T_3$ case is solved using the solutions of the trivial $T_1$ case.
In other words, if we know the solutions of $T_1$, we can reduce the $T_3$ to $T_1$ again.
$$(h=4)$$ Similarly to the previous case, we can view $T_4$ as a "composition" of two $T_2$'s connected to each of the two leaves of one "base" $T_1$. That is, we "reduce" $T_4$ to $T_1$ by "solving" the four $T_2$'s, moving all of their frogs to the root of the "base".
We can "reduce" ("solve") $(T^{(i)}_2,i\in\{1,2,3,4\})$'s using following two move sequences:
$v_i(2,1)\xrightarrow{1} v_i(1,1)$ and $v_i(2,2)\xrightarrow{1} v_i(1,1)$, then $\left(v_i(1,1)\xrightarrow{3} v_i(2,4)\right)\xrightarrow{4} r_0$.
$r_i\xrightarrow{1} v_i(1,2)$ and $v_i(2,3)\xrightarrow{1} v_i(1,2)$, then $v_i(1,2)\xrightarrow{3} r_0$.
The vertices from the first (second) move sequence are colored blue (green) here in the $T_2^{(i)}$:
After the reduction, we are left with the "base" $T_1$ which is solved with [1.1] from the trivial case. In other words, we essentially moved all frogs from layers $2,3,4$ of $T_4$ to the root, leaving us only with layers $0,1$ which is equivalent to $T_1$.
$$(h=5)$$ Similarly to previous two cases, this $T_5$ case can be reduced to $T_1$ by solving four $T_3$'s. That is, we can reduce the layers $2,3,4,5$ by moving all their frogs to the base root. To do this, there are three separate move sequences highlighted in the following image (each puts $5$ frogs in a leaf vertex before moving the frogs to the base root):
$\left(\left(v_i(2,1)\xrightarrow{1} v_i(3,1)\right)\xrightarrow{2} v_i(1,1)\xleftarrow{1} r_i\right)$, then $\left(v_i(1,1)\xrightarrow{4}v_i(3,8)\right)\xrightarrow{5}r_0.$
$\left(v_i(3,3)\xrightarrow{1} v_i(2,2)\xleftarrow{1} v_i(3,4)\right)$, then $\left(\left(v_i(2,2)\xrightarrow{3} v_i(1,2)\right)\xrightarrow{4}v_i(3,2)\right)\xrightarrow{5}r_0.$
$\left(v_i(3,5)\xrightarrow{1} v_i(2,3)\xleftarrow{1} v_i(3,6)\right)\xrightarrow{3} v_i(3,7)$, then $\left(v_i(2,4)\xrightarrow{1}v_i(3,7)\right)\xrightarrow{5}r_0.$
$$(h\ge 6)$$ These examples so far motivate the following question:
Is the reduction from $T_h$ to $T_1$ by solving four $T_{h-2}$'s always possible?
This could be one way to solve the problem. On the other hand, are there any other reductions?
Reduction pattern based on $T_6,T_7$
Since reductions from $T_h$ to $T_1$ over $T_{h-2}$'s seem to get more complex as $h$ grows, this raises the question of whether there are any other, simpler reductions.
It is sometimes possible to reduce $T_h$ to $T_{h-2}$ over $T_1$'s. Take a look at the following examples.
Let $k=0,1,2,\dots,2^{h-2}-1$. If we look at the $T_6$, we can reduce its top two layers (reducing it to $T_{4}$) using the following:
$v(6,4k+1)\xrightarrow{1} v(5,2k+1)$ and $v(6,4k+2)\xrightarrow{1} v(5,2k+1)$,
$\left(v(5,2k+2)\xrightarrow{1} v(6,4k+3)\right)\xrightarrow{2} v(6,4k+4)$,
$\left(v(5,2k+1)\xrightarrow{3} v(6,4k+4)\right)\xrightarrow{6} r$.
Similarly, we can reduce $T_7$ to $T_5$ by using steps 1. and 2. from above, but modifying step 3. as follows (note that the layers $5$ and $6$ in all steps become $6$ and $7$, of course):
$\left(v(7,4k+4)\xrightarrow{3} v(6,2k+1)\right)\xrightarrow{6} r$.
We can ask if there are any other trees $T_h$ whose top two layers can be reduced.
Notice that we are collecting $T_1$'s in this reduction, that they contain $3$ vertices (frogs) each and that the total number of used $T_1$'s in our move sequence(s) must be a power of $2$ to completely reduce the top two layers, where we have $2$ possible layers $\{h,h-1\}$ to put the frogs on before jumping to root. Because of this, it is meaningful to try to extend this pattern to the full binary trees of the following kind:
If $h\in\{3\cdot2^t,3\cdot2^t+1\},t\in\mathbb N$, for which $t$'s is such $T_h$ reducible to $T_{h-2}$?
This now does smell like something that would be used in an inductive proof. But, even if this pattern holds for all $t$, it alone is not enough. The question is, can we find a sufficient set of reduction patterns?
Let $s\lt h$ and $h\gt 2$. Is it true that every $T_h$ can be reduced to some $T_s$ ?
Trivial reduction pattern
If $h=2^t+t-2,t\ge 2$ and $T_{t-1}$ is solvable in root, then $T_h$ can be reduced to $T_{h-t}$.
This is trivially true because the distance of the root of "$T_{t-1}$ that spans the top $t$ layers" to the "(base) root $r=r_0$", is precisely equal to $2^t-1$, which is the total number of frogs in $T_{t-1}$.
For example, if $t=4$ we have $h=18$. Solving the top $t=4$ layers as $(T_{t-1}=T_{3})$'s, each in its own root, puts $15$ frogs on each of those roots, which lie on layer $15$. These frogs can jump to the root of $T_h$ which leaves us with $T_{h-t}=T_{14}$.
Both this and the previous reduction pattern are exponential and are not enough to cover all values of $h$ in the inductive argument. That is, we need more statements like this to cover all values.
Other examples
I've managed to find reductions for $T_h$ for $h$ up to $20$ by hand to show they are all solvable in the root, but I still don't see how I could complete the inductive argument or solve this problem in some other way for all $h\ne 2$.
For example, I can reduce $T_8,T_9,T_{10},T_{11}$ to $T_5,T_5,T_6,T_7$ respectively.
Then, $T_{12},T_{13}$ belong to $t=2$ from the above reduction pattern for $h\in\{3\cdot2^t,3\cdot2^t+1\}$ and can be reduced to $T_{10},T_{11}$. It is not as easy as $t=1$, but the move sequences do exist.
After those, I can reduce $T_{14},T_{15},T_{16},T_{17}$ to $T_{11},T_{11},T_{12},T_{13}$.
Then, $T_{18}$ belongs to $t=4$ of the trivial reduction pattern which means it can be reduced to $T_{14}$.
For $T_{19}$ I found a reduction to $T_{14}$,... and so on.
Note that these reductions are not the only ways to solve a tree in the root.
Necessary reduction condition
Notice that to reduce $T_h$ to $T_{h-a}$ for some $a\ge 2$, we need to move all frogs from top $a$ layers to the root. The number of frogs we are moving to the root in such reduction is $(2^a-1)2^{h-a+1}$. That is, we are shaving off the $T_{a-1}$'s at the top layers. For such a reduction to be possible, there must exist a set of moves that transfers the frogs to the root $r$
$$M_r=\{v(i,j)\xrightarrow{f_{i,j}}r\},\text{ where } i\in\{h,h-1,\dots,h-a+1\}.$$
Assuming such moves exist at some point in the game, they are legal only if $f_{i,j}=i$ where $i$ is the layer on which the vertex $v(i,j)$ is on. On the other hand, the sum of $f_{i,j}$'s must be equal to the total number of frogs that we are reducing (moving to root). That is, for $M_r$ to be able to exist, we necessarily need to be able to partition the number of frogs we are moving onto the corresponding layers, and this is essentially what we state in the necessary condition.
We also care if our move sequences repeat or have patterns (for example, see the use of $k$ in the $T_6,T_7$ reduction pattern), because then instead of considering all frogs at once, we can consider only $(2^a-1)2^{b},b\in[0,h-a+1]$ frogs for some $b$, and repeat the same move pattern $(2^{h-a+1}/2^b)$ times.
Based on all of this, we can state the following necessary condition:
(Necessary condition for reduction). Let $h\gt2,a\ge2$. If reduction from $T_h$ to $T_{h-a}$ is possible, then there exists $b\in[0,h-a+1]$ and a partition of the number $(2^a-1)2^b$ into the layers $H=\{h,h-1,h-2,\dots,h-a+1\}$ where $(h-l)\in H$ is used at most $(2^{a-l-1})2^b$ times.
Of course, the converse does not hold because this is only a necessary (not a sufficient) condition.
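This check is easy to mechanise; the Python sketch below (my own, with purely illustrative parameter choices) searches, for given $h$ and $a$, for a $b$ and layer multiplicities meeting the stated sum and bounds.
def reduction_partitions(h, a, b_max=None):
    # Yield (b, counts) with sum(counts[l]*(h-l)) == (2**a - 1)*2**b and
    # counts[l] <= 2**(a - l - 1) * 2**b, i.e. the necessary condition above.
    if b_max is None:
        b_max = h - a + 1
    for b in range(b_max + 1):
        target = (2 ** a - 1) * 2 ** b
        layers = [h - l for l in range(a)]
        bounds = [2 ** (a - l - 1) * 2 ** b for l in range(a)]
        def search(l, remaining, chosen):
            if l == a:
                if remaining == 0:
                    yield (b, tuple(chosen))
                return
            for c in range(min(bounds[l], remaining // layers[l]) + 1):
                yield from search(l + 1, remaining - c * layers[l], chosen + [c])
        yield from search(0, target, [])
# Example from the text: h = 19, a = 5, b = 0 should admit 31 = 15 + 16.
for b, counts in reduction_partitions(19, 5, b_max=0):
    print("b =", b, dict(zip(range(19, 14, -1), counts)))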
For an example, if we want a $T_h$ to $T_{h-2}$ reduction then we need to partition $3\cdot 2^b$ into $\{h,h-1\}$. If we assume that the solution is "simple", i.e. that all frogs are either on layer $h$ or layer $h-1$ (but not mixed between both layers), we get the following pattern:
$h$ must either be of form $3\cdot 2^t$ or $3\cdot 2^t+1$. This was essentially our assumption on $h$ when we talked about extending the $T_6,T_7$ pattern. But now we also need a sufficient condition. That is, if we want to prove this is possible for every $t$, then we need to for every $t$ find a sequence of corresponding legal moves or prove it exists.
Solvable partitions?
Surprisingly, the above condition is already very useful (efficient). For all $h\in(2,20)$ that I've solved so far, I needed to consider only $31$ or fewer vertices ($a\le 5,b\le 3$) to show that the top layers can be reduced, i.e. that we can send all of their frogs to the root.
For example, the $T_{19}$ has over a million vertices! But, to solve it, one needs to consider just one $T_4$ (which has only $31$ vertices) at the top layers to find a solution! (Consider $a=5,b=0$ and the corresponding partition $31=15+16$ is solvable.) That is, the reduction pattern of that $T_4$ can simply be repeated on all adjacent $T_4$'s in the top five layers, which gives us a reduction to $T_{14}$ which we have already solved by reduction to $T_{11}$ then to $T_5$ then to $T_1$.
Essentially, we need to prove that at least one of the partitions given by the "(Necessary condition for reduction)" always allows the reduction to a smaller binary tree (that the converse holds for at least one partition). Then by induction, we have that they are all reducible to $T_1$.
I don't yet see how to construct move sequences for all cases of $h$. Alternatively, I do not know if we can maybe prove it without explicit construction of solutions?
|
https://math.stackexchange.com/questions/3800570
|
[
"graph-theory",
"induction",
"recreational-mathematics",
"trees"
] | 19 | 2020-08-23T03:56:44 |
[
"Since the bounty didn't lead to anything new, its now crossposted to mathoverflow.net."
] | 0 |
Science
| 0 |
339
|
math
|
Is there an irrational number containing only $0$'s and $1$'s with continued fraction entries less than $10$?
|
The number $$0.10111001001000000000001$$ has continued fraction $$[0, 9, 1, 8, 9, 5, 1, 1, 5, 3, 1, 3, 1, 1, 4, 6, 1, 1, 8, 2, 5, 8, 1, 9, 9, 5, 2
, 8, 1, 1, 6, 4, 1, 1, 3, 1, 3, 5, 1, 1, 5, 9, 8, 1, 9]$$
So the maximum value is $9$. But we are only at $23$ digits. Can we produce longer decimal expansions with the required property? Perhaps arbitrarily long ones?
Is there an irrational number such that the decimal expansion contains only ones and zeros and the continued fraction contains no entry larger than $9$?
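For generating longer examples, here is a small Python sketch (mine; the sample string is the one above) that computes the continued fraction of a finite $0$-$1$ decimal and reports its largest partial quotient.
from fractions import Fraction
def continued_fraction(x):
    # continued fraction of a positive rational x as a list of partial quotients
    cf = []
    while True:
        a = x.numerator // x.denominator
        cf.append(a)
        x -= a
        if x == 0:
            return cf
        x = 1 / x
digits = "10111001001000000000001"
x = Fraction(int(digits), 10 ** len(digits))
cf = continued_fraction(x)
print(cf)
print("largest partial quotient:", max(cf[1:]))   # per the expansion above, this should be 9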
|
https://math.stackexchange.com/questions/2104883
|
[
"number-theory",
"irrational-numbers",
"continued-fractions"
] | 19 | 2017-01-19T10:25:18 |
[
"A shorter example without long sequences is $$0.1011010011100100111101101110101$$",
"But I would prefer sequences without such long $0$-sequences, if possible :)",
"$$0.1011010000001000000010000000000000$$ $$0000000010000000000000000000000000000000000000 0000001$$ is far better with $87$ digits! Seems that we can continute forever, if we add a digit $1$ and then insert enough $0$'s between the last two ones ...",
"(+1) An interesting question on the intersection of Cantor-like sets."
] | 0 |
Science
| 0 |
340
|
math
|
Increasing derivatives of recursively defined polynomials
|
Consider recursively defined polynomials $f_0(x) = x$ and $f_{n+1}(x) = f_n(x) - f_n'(x) x (1-x)$.
These polynomials have some special properties, for example $f_n(0) = 0$, $f_n(1) = 1$, and all $n+1$ roots of $f_n$ are in $[0,1)$.
Let $x_n$ denote the largest root of $f_n$. Then $f_n(x_n) = 0$ and $f_n'(x_n)>0$. Moreover, $x_n > x_{n-1}$ for all $n$.
I want to prove the following claim: $f_{n}'(x_{n+1}) > f_{n-1}'(x_{n+1})$ for all $n \geq 2$.
Note that the claim does not hold for arbitrary $x$. The derivatives are polynomials themselves, by Gauss-Lucas theorem all their roots are in $[0,x_n)$ and there are many points where $f_{n}'(x) < 0 < f_{n-1}'(x)$. However, I am quite sure that at $x \geq x_{n+1}$, the derivatives are ordered: $f_1'(x) < \dots < f_n'(x)$.
Some of the first polynomials are:
$f_1(x) = x^2$, $f_1'(x) = 2x$, $x_1 = 0$
$f_2(x) = 2x^3 - x^2$, $f_2'(x) = 6 x^2 -2x$, $x_2 = \frac{1}{2}$
$f_3(x) = 6 x^4 - 6x^3 + x^2$, $f_3'(x) = 24x^3-18x^2+ 2x$, $x_3 \approx 0.7887$
$f_4(x) = 24 x^5 - 36 x^4 + 14 x^3 -x^2$, $f_4'(x) =120x^4-144x^3 +42x^2 -2x$, $x_4 \approx 0.9082$
Therefore $f_2'(x)-f_1'(x) = 6x^2-4x \geq 0$ for all $x \geq \frac{2}{3}$. Note that, $x_2 < \frac{2}{3} < x_3$.
Similarly, $f_3'(x) - f_2'(x) = 24 x^3-24 x^2 + 4 x \geq 0$ for all $x \geq 0.7887$. Here it turns out that at $x_3$ the inequality holds as an equality (a coincidence, perhaps?), but of course then for $x_4 > x_3$ it holds as a strict inequality.
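The claim is easy to probe numerically; here is a numpy sketch of mine (a finite check only, of course, not a proof) that builds the $f_n$, extracts the largest root $x_{n+1}$, and compares the two derivatives there.
import numpy as np
from numpy.polynomial import Polynomial as P
x_poly = P([0, 1])                       # the polynomial x
f = [x_poly]                             # f_0(x) = x
for _ in range(10):
    f.append(f[-1] - f[-1].deriv() * x_poly * (1 - x_poly))
def largest_root(p):
    return max(z.real for z in p.roots() if abs(z.imag) < 1e-9)
for n in range(2, 10):
    xn1 = largest_root(f[n + 1])
    gap = f[n].deriv()(xn1) - f[n - 1].deriv()(xn1)
    print(n, round(xn1, 6), gap > 0)     # the claim predicts True throughout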
|
https://math.stackexchange.com/questions/1887362
|
[
"real-analysis",
"polynomials",
"recurrence-relations",
"roots"
] | 19 | 2016-08-09T10:49:41 |
[
"That's right, the claim does not hold for $n=1$. What I wanted was for all $n \\geq 2$ (in my application $f_0$ is irrelevant, I'm just using it for construction). Sorry for the confusion. I'm updating the question.",
"The claim is false since $f'_1(x_2)=f'_0(x_2)$.",
"Just a conjecture, coming from looking at the plots for the first 5-6 cases.",
"Why are you quite sure the derivatives are ordered at $x \\geq x_{n+1}$?"
] | 0 |
Science
| 0 |
341
|
math
|
Calculate using residues $\int_0^\infty\int_0^\infty{\cos\frac{\pi}2(nx^2-\frac{y^2}n)\cos\pi xy\over\cosh\pi x\cosh\pi y}dxdy,n\in\mathbb{N}$
|
Q: Is it possible to calculate the integral
$$
\int\limits_0^\infty \int\limits_0^\infty\frac{\cos\frac{\pi}2
\left(nx^2-\frac{y^2}n\right)\cos \pi xy}{\cosh \pi x\cosh \pi y}dxdy,~n\in\mathbb{N}\tag{1}
$$
using residue theory?
For example, when $n=3$
$$
\int\limits_0^\infty \int\limits_0^\infty\frac{\cos\frac{\pi}{2}
\left(3x^2-\frac{y^2}{3}\right)\cos \pi xy}{\cosh \pi x\cosh \pi y}dxdy=\frac{\sqrt{3}-1}{2\sqrt{6}}.
$$
There is a closed form formula to calculate (1) for arbitrary natural $n$, but I don't know how to do it by residue theory. Maybe it is possible in principle, but is residue theory practical in this particular case? It seems such an approach would lead to a sum with $O(n^2)$ terms. Any hints would be appreciated.
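Not an answer about residues, but a numerical sanity check for $n=3$ is straightforward if one truncates the range (the $1/\cosh$ factors decay like $e^{-\pi x}$, so the cutoff below, my own choice, costs essentially nothing):
import numpy as np
from scipy.integrate import dblquad
def integrand(y, x, n):
    return (np.cos(np.pi / 2 * (n * x ** 2 - y ** 2 / n)) * np.cos(np.pi * x * y)
            / (np.cosh(np.pi * x) * np.cosh(np.pi * y)))
n = 3
val, err = dblquad(integrand, 0, 6, lambda x: 0, lambda x: 6, args=(n,))
print(val, (np.sqrt(3) - 1) / (2 * np.sqrt(6)))   # these should agree to several digits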
|
https://math.stackexchange.com/questions/2104333
|
[
"integration",
"complex-analysis",
"definite-integrals",
"contour-integration",
"residue-calculus"
] | 19 | 2017-01-19T02:28:47 |
[
"You mentioned that there is a closed form formula. Would you care to provide it?",
"@AlfredYerger I know $I_{1,2}$ can be evaluated with residue theory. It has been done by Ron Gordon, for exampe, here on MSE. I believe his method can be applied to the case $n=3$ by calculating the integral over $x$ first, then similarly the remaining integral over $y$.",
"This is very interesting. So you would be content if the integrals $I_1$ and $I_2$ just below (26) in this paper would be evaluated with residue theory? arxiv.org/pdf/1712.10324.pdf",
"@AlfredYerger please see my paper at arxiv regarding 2D Mordell integrals.",
"It is also possible the LM estimates one typically sees are too coarse for this integral, and something better is needed to show that the integral along such an arc goes to $0$ as $R \\to \\infty$, but I don't have any ideas right now.",
"In particular, the place I got stuck is at estimating $| \\int_\\gamma \\frac{\\cos \\frac{\\pi}{2} u^2 \\cos \\pi u v}{\\operatorname{cosh} \\pi u/\\sqrt{n}}|$ which I obtained by making the change of variables $u = \\sqrt{n}x$ and $v = y/\\sqrt{n}$, using the angle subtraction formula, distributing, and then pulling out some terms depending only on $v$ from the first integral. This integral is of an even function, so we can extend to an integral over the whole real line. If the LM estimates for a semi-circular arc in the upper half plane could be made to $\\to 0$, we could try again to calculate residues.",
"Has the case $n=3$ been done with Residue theory? I spent some time thinking about this evening and I don't see how it can be possible. There is no residue theory to my knowledge for double integrals, so you must do a residue calculation on one or both of the iterated integrals, but after some algebra, I see no way to make the LM estimates work, so I don't see how it can be possible. If the LM estimates could be worked out, I would be willing to try again.",
"@Nicco I don't mind",
"@ Nemo:Based on residue theory.Unfortunately my browser can't display the formulas in your blog",
"@ Nemo: would you mind if I can offer a bounty for this question?",
"thx, that is cool stuff...are do you doing this on a recreational basis or is this for professional purposes?",
"@tired it is not due to Ramanujan.",
"can you give a reference for the amazing formula $n=3$? i bet it is due to ramanujan"
] | 0 |
Science
| 0 |
342
|
math
|
When does $x^{x^{x^{...^x}}}$ diverge but $x^{x^{x^{...^c}}}$ converge?
|
Let us define these two sequences as follows:
$a_0=1$, $b_0=c$
$a_{n+1}=x^{a_n}$, $b_{n+1}=x^{b_n}$
$b_{n+1}\ne b_n$ for any $n$.
$x,c\in\mathbb C$
Is it possible for $a_n$ to diverge but $b_n$ to converge under these conditions? For example, if $x=2$ and $c=i$, we have
$b_1=2^i=\operatorname{cis}(\ln2)$
$b_2=2^{\operatorname{cis}(\ln2)}=2^{\cos(\ln2)}\operatorname{cis}(\ln2\sin(\ln2))$
etc.
I don't have much of a clue as to whether $b_n$ can converge while $a_n$ diverges, and I can hardly work out whether $b_n$ converges for $x=2$ and $c=i$.
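At least one can experiment numerically; the Python sketch below (mine, using the principal branch as in the computation above) prints the first few iterates of $b_{n+1}=2^{b_n}$ starting from $c=i$. It only suggests behaviour and proves nothing.
import cmath
log2 = cmath.log(2)                 # principal branch, matching cis(ln 2) above
b = 1j                              # b_0 = c = i
for n in range(1, 16):
    try:
        b = cmath.exp(b * log2)     # b_n = 2^(b_{n-1})
    except OverflowError:
        print("overflow at n =", n)
        break
    print(n, round(b.real, 4), round(b.imag, 4), "|b| =", round(abs(b), 4))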
|
https://math.stackexchange.com/questions/2076008
|
[
"sequences-and-series",
"exponentiation",
"tetration",
"complex-dynamics"
] | 19 | 2016-12-29T05:50:35 |
[
"@miloBrandt b=exp(1/e) has a neutral fixed point=e, and iterating b^^n converges towards e.",
"One can work out that the only times that $x^{\\ldots^c}$ can converge without hitting a fixed point is whenever $W(-\\log(x))$ has absolute value at most $1$, where $W$ is the product log (and there are definitely converging $c$ when this absolute value is strictly less than $1$). I would maybe bet that there's an example of this happening where the absolute value is exactly $1$ and that where it's less than $1$, the tower converges starting at $1$.",
"@SheldonL If you still wish to I would be interested in seeing your numerical results.",
"@SheldonL: if you'll get something substantial - I'd like to see a workout also in our tetrationforum! (It seems to be an interesting facet in context with the D. Shell article)",
"Even considering complex values of x, I believe the surprising result is that there are no solutions! I wrote a pari-gp loop that verified this for over 300,000 real and complex bases with attracting fixed points .... I think this is the case since I think the domain of the basin of attraction of the attracting fixed point can be extended to f'=0; but I will need to do some reading to brush up on complex dynamics, and iterated functions.",
"@SheldonL Oddly, though I am asking such questions, such vocabulary is new to me. But it's ok, I get the gist of what you are saying. If you can think up a worthy answer, go for it and I'll ask clarifications later. And do complex fixed points behave like real ones? It seems quite hard to discern",
"@SimpleArt ouch!, but I think. $0<b<e^{-e}$ has repelling fixed points.... negative bases? complex bases? it might be easier to work with iterating $z \\mapsto \\exp(z)+k;\\;\\;\\;k=\\ln(\\ln(b))$ which is congruent to $y \\mapsto b^y;\\;\\;\\;y=(z-k)/ln(b)$",
"@SheldonL Hm, I'm not entirely sure. I'm also wondering about $x<e^{-e}$.",
"hmmm, for real values of x, the first requirement is that x>exp(1/e) so that the first power tower diverges. But all complex fixed points for bases>exp(1/e) are repelling. So doesn't that mean the 2nd power tower won't converge? And therefore there are no real valued solutions to the Op's problem?",
"Hmm, the clause $c \\ne x^c$ intends to exclude the \"trivial\" case of $c$ being the fixpoint. But consider cases , where the exponentialtower to base $x$ has a set of cyclic fixpoints (which might also be accumulation points such that the height-increasing exponentialtower of $b_n$ \"converges\" to a cycle over such a set of points) then you might want to add an analogue clause/to extend the given clause. (I've not yet a true answer to your question so far)",
"Then the second paper may be useful.",
"@Rohan I'm not so sure, both papers cover the topic of my sequence $a_n$, which I already know about, I'm more interested in $b_n$.",
"I hope this and this will help you."
] | 0 |
Science
| 0 |
343
|
math
|
Consecutive prime numerators of harmonic numbers?
|
Let
$$\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n}=\frac{a}{b}$$
where $a$ and $b$ are coprime, and set $h_{n}=a$.
$h_{n}$ is prime for
$$n=2,3,5,8,9,21,26,41,56,62,69,79,89,91,122,127,143,167,201,230,247,252,290,349,376,459,489,492,516,662,687,714,771,932,944,1061,1281,1352,1489,1730, 1969,2012,2116,2457,2663,2955,3083,3130,3204,3359,3494,3572,3995,4155,4231,4250,4496,4616,5069,5988,6656,6883,8067,8156,8661,9097,\ldots$$
I guess proving that there are infinitely (or only finitely) many primes of the form $h_n$ is very hard. But can we prove that $h_n$ and $h_{n+1}$ cannot both be prime for $n>8$?
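For reproducing the list, here is a short Python sketch (the cutoff is an arbitrary choice of mine) using exact fractions and sympy's primality test.
from fractions import Fraction
from sympy import isprime
H = Fraction(0)
hits = []
for n in range(1, 300):             # small illustrative cutoff
    H += Fraction(1, n)
    if isprime(H.numerator):        # h_n = numerator of H_n in lowest terms
        hits.append(n)
print(hits)                         # should start 2, 3, 5, 8, 9, 21, 26, 41, ...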
|
https://math.stackexchange.com/questions/1640772
|
[
"prime-numbers",
"harmonic-numbers"
] | 19 | 2016-02-04T11:16:53 |
[
"@vrugtehagel You're welcome.",
"@Mr. Brooks, no, thank you. That is be some very useful information",
"@vrugtehagel Did you look in oeis.org/A056903/b056903.txt ? They call it a \"b-file\" and it goes up to $h_{97} = 78128$.",
"I haven't looked much into it before. The only thing I've done on this subject is prove that the harmonic series is never an integer for $n>1$. But this is interesting, so I'll definitely look into it",
"@vrugtehagel : what do you already know about the numerators of the harmonic numbers ?",
"I edited the question to include some more values for $n$, more than oeis.org, which for some reason only includes values up to $3572$",
"Just for reference, the first values of $h_n$ are listed here: oeis.org/A001008",
"What sort of question is this? I think it isn't trivial. Where is the conjecture from, just looking on the numbers or are there reasons why it should be right?"
] | 0 |
Science
| 0 |
344
|
math
|
An elegant description for graded-module morphisms with non-zero zero component
|
In an example I have worked out for my work, I have constructed a category whose objects are graded $R$-modules (where $R$ is a graded ring), and whose morphisms are the usual morphisms after "quotienting" by the following class of morphisms:
$\Sigma=\left\lbrace f\in \hom_{\text{gr}R\text{-mod}}\left(A,B\right) \ | \ \ker\left(f\right)_0\neq 0, \ \mathrm{coker}\left(f\right)_0\neq 0\right\rbrace$
(by quotienting I simply mean that the morphisms in this class are declared to be isomorphisms, thus creating an equivalence relation). I am wondering if this category has a better (more canonical) description, or if I can show it is equivalent to some other interesting category.
Thanks!
|
https://math.stackexchange.com/questions/730
|
[
"modules",
"graded-modules",
"graded-rings"
] | 19 | 2010-07-26T10:29:10 |
[
"Wow this is the first non-deleted question! Good job.",
"Well, I don't remember. I do remember that I was localizing by these morphisms and that was what I meant by quotienting. Judging by the date, I was probably trying to work out a ncag example. I don't have a clue what it was!",
"@MarianoSuárez-Alvarez I was trying to get the altruist badge, so I found the oldest unanswered question and put a bounty on it. It makes sense that this has been unanswered for three years...",
"This is very weird, really: you are making all modules with non-zero zero component isomorphic... Does this really make sense in some context?",
"Do you mean to make the maps in $\\Sigma$ all the zero maps (which is quotienting) or do you mean to make the maps in $\\Sigma$ isomorphisms (which is localizing)? As $\\Sigma$ is not an ideal the first isn't well defined, and as $\\Sigma$ can contain the zero map between modules the second doesn't seem to make much sense either.",
"Since $\\Sigma$ isn't an ideal in the category of graded $R$-modules, I don't think that quotienting by it makes much sense."
] | 0 |
Science
| 0 |
345
|
math
|
Hopf-like monoid in $(\Bbb{Set}, \times)$
|
I am looking for a nontrivial example of the following:
Let a monoid $A$ be given with unit $e$, together with two distinguished submonoids $B_1$ and $B_2$ whose intersection is trivial (i.e. $B_1\cap B_2=\{e\}$), endowed with monoid homomorphisms $\Delta_i:A\to B_i$ such that $\Delta_i|_{B_i}={\rm id}_{B_i}$ ($i=1,2$).
Moreover, also let a mapping $\sigma:A\to A$ be given, such that for all $a\in A$:
$$\sigma(\Delta_1(a))\cdot\Delta_2(a)=e= \Delta_1(a)\cdot\sigma(\Delta_2(a)) $$
In particular, each element of $B_1$ is right invertible, and each element of $B_2$ is left invertible in $A$.
Such an example is trivial if $B_1=B_2=A$.
Motivation: I am looking for Hopf-like monoids in cartesian categories, in which the counit axioms for the unique $A\to 1$ morphism are not posed. Then, the comultiplication map $\Delta:A\to A\times A$ has to be of the form $\Delta=(\Delta_1,\Delta_2)$ and coassociativity means ${\Delta_i}^2=\Delta_i$ and $\Delta_1\Delta_2=\Delta_2\Delta_1$. $B_1\cap B_2=\{e\}$ ensures this latter equality.
Of course, $\sigma$ would be the antipode.
|
https://math.stackexchange.com/questions/292279
|
[
"examples-counterexamples",
"monoid",
"hopf-algebras"
] | 19 | 2013-02-01T10:40:31 |
[] | 0 |
Science
| 0 |
346
|
math
|
Integral involving Complete Elliptic Integral of the First Kind K(k)
|
I have run into an integral involving the complete elliptic integral, which can be put into the following form after changing integration variables to the modulus:
$$\int_0^{\sqrt{\frac{\alpha}{1+\beta}}} dk\, \frac{ k^{11} K(k) } {\sqrt{(\alpha-\beta k^2)^2 - k^4} (\alpha - \beta k^2)^{11/2}}$$
$K(k)$ is the complete elliptic integral of the first kind, where $k$ is the modulus. We can assume that $\alpha$ and $\beta$ are such that the maximum value for $k$ is less than or equal to $1$. Are there any ways to get a closed form solution out of this? The indefinite integrals in G&R are not much help.
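In case it helps anyone experimenting with this, here is a minimal Python/mpmath sketch (with the purely illustrative choice $\alpha=\beta=1$, so the upper limit is $1/\sqrt2\le1$) that produces a high-precision reference value against which any candidate closed form could be tested; note that mpmath's `ellipk` takes the parameter $m=k^2$, not the modulus.

    from mpmath import mp
    mp.dps = 30
    alpha, beta = mp.mpf(1), mp.mpf(1)          # illustrative values only
    kmax = mp.sqrt(alpha / (1 + beta))
    def integrand(k):
        a = alpha - beta * k**2
        return k**11 * mp.ellipk(k**2) / (mp.sqrt(a**2 - k**4) * a**(mp.mpf(11) / 2))
    print(mp.quad(integrand, [0, kmax]))        # tanh-sinh handles the endpoint 1/sqrt singularity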
|
https://math.stackexchange.com/questions/52933
|
[
"integration",
"special-functions",
"elliptic-integrals"
] | 19 | 2011-07-21T09:47:28 |
[
"+1 for remembering to mention your argument convention for elliptic integrals. It looks a bit gnarly as it stands, but I'll see what I can do."
] | 0 |
Science
| 0 |
347
|
math
|
Expected number of operations on a vector until one of the coordinates becomes zero.
|
Let's say we have a vector $v = (x_1, ..., x_n) \in \mathbb{N}^n$ where $x_1 = x_2 = ... = x_n$. Next we choose an ordered pair of coordinates at random $(i, j)$ where $i, j \in \\{1, ..., n\\}$ and $i \neq j$. Finally we substitute the vector $v$ with a new vector $v' = (x_1, ..., x_i + 1, ..., x_j - 1, ..., x_n)$. Now we choose again an ordered pair of coordinates at random and substitute the vector $v'$ with a new vector doing the same we did for $v$. We continue doing this until one of the coordinates becomes zero.
What is the expected number of operations we are going to make?
I know the answer for $n = 2$ because you can model this process with a random walk. If $v = (x, x)$, then the expected number of operations is the same as the expected number of steps it will take to hit $x$ or $-x$ doing a random walk starting at zero. In this case the expected number of step starting at $y$ satisfies the recurrence relation $$E_y = 1 + \frac{1}{2} E_{y - 1} + \frac{1}{2} E_{y + 1}. $$ Then one can solve this linear recurrence.
I tried to do the same for the original problem but the recurrence relation is more difficult. Let $F_{(x_1, ..., x_n)}$ be the expected number of operations one can make to the vector $v= (x_1, ..., x_n)$ before one of the coordinates becomes zero (in this case we allow $x_1, ..., x_n$ to be different). If I'm not wrong, $F$ satisfies the following relation $$F_{(x_1, ..., x_n)} = 1 + \sum_{i, j} \frac{1}{n (n - 1)} F_{(x_1, ..., x_n) + e_{i, j}}, $$ where the $i$-th coordinate of $e_{i, j}$ is $1$, the $j$-th is $-1$ and the rest are all zero (the sum runs through all possible operations).
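To experiment with this, here is a small Python sketch (standard library only) that estimates the expectation by simulation; for $n=2$ the estimate should land near $x^2$, matching the random-walk computation above.

    import random
    def expected_steps(n, x0, trials=2000):
        total = 0
        for _ in range(trials):
            v = [x0] * n
            steps = 0
            while min(v) > 0:
                i, j = random.sample(range(n), 2)   # uniformly random ordered pair with i != j
                v[i] += 1
                v[j] -= 1
                steps += 1
            total += steps
        return total / trials
    print(expected_steps(2, 5))   # should be roughly 25 = x0^2
    print(expected_steps(3, 5))   # the quantity asked about, for n = 3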
|
https://math.stackexchange.com/questions/2675605/expected-number-of-operations-on-a-vector-until-one-of-the-coordinates-becomes-z
|
[
"probability"
] | 13 | 2018-03-03T16:41:57 |
[
"I think we can project the movement to hyperplane $x_1+\\cdots+x_n=nN$, where $N=x_1=\\cdots=x_n$ is the initial coordinate of $v$. With the stopping condition(i.e. stop when some coordinate becomes zero) we are on a bounded domain, with each transition $(0,\\cdots,1,\\cdots,-1,\\cdots,0)$ parallel to some boundary line $x_k=0,k\\not\\in\\{i,j\\},x_i+x_j=nN$. Seem like we can reduce the dimension...will this help?",
"Note that raising operators do not modify the total weight of vectors. Moreover, whenever $a$ and $b$ are of equal weight and $a\\leqslant b$, there is a raising operator $R$ such that $Ra =b$. It is not clear if this should really help here, though, but perhaps knowing these operators have a name might help in searching for references. Buena suerte! ;)",
"The operation you are considering is called an (elementary?) raising operator (a general raising operator is a product of perhaps repeated elementary ones). There is a partial order on $\\mathbb Z^n$ given by $a \\geqslant b$ iff for each $i$ we have $a_1+\\cdots+a_i\\geqslant b_1+\\cdots+b_i$, and raising operators are nondecreasing for this order."
] | 3 |
Science
| 0 |
348
|
math
|
Reversal of an Autoregressive Cauchy Markov Chain
|
Let $\mu_0 (dx)$ be the standard one-dimensional Cauchy distribution, i.e.
\begin{align} \mu_0 (dx) = \frac{1}{\pi} \frac{1}{1+x^2} dx. \end{align}
Suppose I fix $h \in [0, 1]$, and form a Markov chain $\\{X_n\\}_{n \geqslant 0}$ as follows:
1. At step $n$, I sample $Y_n \sim \mu_0$.
2. I then set $X_{n+1} = (1 - h) X_n + h Y_n$
It is not so hard to show that this chain admits $\mu_0$ as a stationary measure; this essentially comes from the fact that the Cauchy distribution is a stable distribution.
What I'm interested in is the reversal of this Markov chain. More precisely, if the chain I describe above uses the Markov kernel $q (x \to dy)$, I want to understand the Markov kernel $r$ such that
\begin{align} \mu_0 (dx) q (x \to dy) = \mu_0 (dy) r (y \to dx). \end{align}
Fortunately, all of the quantities involved have densities with respect to Lebesgue measure, and as such, I can write down what $r (y \to dx)$ is:
\begin{align} r (y \to dx) = \frac{1 + y^2}{\pi} \frac{1}{1 + x^2} \frac{h}{ h^2 + (y - (1 - h) x)^2} dx. \end{align}
My question is then: is there a simple, elegant way to draw exact samples from $r$?
I would highlight that this is not a purely algorithmic question; I'd _really_ like to understand what this reversal kernel $r$ is doing. A nice byproduct of that would then be that I could simulate from it easily.
For completeness, some of the `purely algorithmic' solutions I had considered were the following.
* I could try rejection sampling, and in principle this would work, but it wouldn't really give me insight into the nature of the Markov chain. (A concrete sketch of this is included after the list, for reference.)
* I could try something like the inverse CDF method, but it seems to me that the CDF of $r$ is not particularly nice to work with. As such, I'd have to use e.g. Newton iterations to use this method, and I'd prefer to not have to do this.
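Here is that rejection-sampling sketch in Python, for concreteness only (it is exact but gives no structural insight, and it is slow when $h$ is small or $|y|$ is large): the density of $r(y\to dx)$ is at most $\frac{1+y^2}{h}$ times the standard Cauchy density, so one may propose $X$ from a standard Cauchy and accept with probability $h^2/\big(h^2+(y-(1-h)X)^2\big)$; the overall acceptance rate works out to $h/(1+y^2)$.

    import math, random
    def sample_reversal(y, h):
        """Draw one exact sample from r(y -> dx) by rejection."""
        while True:
            x = math.tan(math.pi * (random.random() - 0.5))   # proposal: standard Cauchy
            if random.random() < h**2 / (h**2 + (y - (1 - h) * x)**2):
                return x
    samples = [sample_reversal(1.5, 0.3) for _ in range(10000)]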
|
https://math.stackexchange.com/questions/3014357/reversal-of-an-autoregressive-cauchy-markov-chain
|
[
"probability",
"markov-chains"
] | 10 | 2018-11-26T05:49:00 |
[
"The general Cauchy distribution $c_w(dx)=\\frac{b dx}{\\pi(b^2+(x-a)^2)}$ where $b>0$ and $w=a+ib$ has Fourier transform $e^{iwt}$ for $t>0.$ Therefore your Markov chain defined by $X_{n+1}=(1-h)X_n+hY_n$ where $Y_n\\sim c_i=\\mu_0$ is such that $X_n\\sim c_{w_n}$ where $w_{n+1}=(1-h)w_n+ih$, hence $w_n=(1-h)^nw_0+ih$. I will try to understand your question in terms of these $w$ and $c_w.$"
] | 1 |
Science
| 0 |
349
|
math
|
Extracting an (almost) independent large subset from a pairwise independent set of Bernoulli variables
|
Let $n>1$, and let $X_1,X_2, \ldots ,X_n$ be non-constant random variables with values in $\lbrace 0,1 \rbrace$. Let us say that a subset of variables $X_{i_1},X_{i_2}, \ldots,X_{i_d}$ is **complete** if the vector $\overrightarrow{X}=(X_{i_1},\ldots,X_{i_d})$ satisfies $P(\overrightarrow{X}=\overrightarrow{c})>0$ for any $\overrightarrow{c}\in \lbrace 0,1 \rbrace^d$.
Prove or find a counterexample : if $X_1,X_2, \ldots ,X_n$ are pairwise independent Bernoulli variables, then we may extract a complete subset of cardinality at least $t+1$, where $t$ is the largest integer satisfying $2^{t} \leq n$.
This is true for $n=3$ (and hence also true for $n$ between $3$ and $7$), as is shown in the main answer to that [MathOverflow question](https://mathoverflow.net/questions/66738/is-there-a-good-explanation-for-this-fact-on-pairwise-independent-variables). (That other [MathOverflow question](https://mathoverflow.net/questions/64973/sufficiently-random-sample) is also related, and provides several links)
If true, this result is sharp, as can be seen by the classical example of taking all sums modulo 2 over nonempty subsets of an initial set of $t+1$ fully independent fair Bernoulli variables. This produces a set of $2^{t+1}-1$ pairwise independent variables in which the maximal cardinality of a complete subset is $t+1$.
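For small $t$ this example is easy to check by brute force; here is a Python sketch (standard library only, and assuming fair initial coins, which is what makes the XOR construction pairwise independent) that verifies pairwise independence and finds the largest complete subset when $t+1=3$.

    from itertools import product, combinations
    from collections import Counter
    t_plus_1 = 3
    omega = list(product([0, 1], repeat=t_plus_1))       # uniform sample space
    subsets = [S for r in range(1, t_plus_1 + 1) for S in combinations(range(t_plus_1), r)]
    def X(S, w):                                          # X_S = sum mod 2 over the subset S
        return sum(w[i] for i in S) % 2
    def independent_pair(S1, S2):                         # joint law uniform on {0,1}^2
        counts = Counter((X(S1, w), X(S2, w)) for w in omega)
        return len(counts) == 4 and all(c == len(omega) // 4 for c in counts.values())
    assert all(independent_pair(S1, S2) for S1, S2 in combinations(subsets, 2))
    def complete(coll):                                   # full support on {0,1}^d
        return len({tuple(X(S, w) for S in coll) for w in omega}) == 2 ** len(coll)
    best = max(d for d in range(1, len(subsets) + 1)
                 for coll in combinations(subsets, d) if complete(coll))
    print(len(subsets), best)                             # 7 variables, largest complete subset has size 3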
**Update 10/10/2012** : By induction, it would suffice to show the following : if $X_1, \ldots ,X_t$ is a fully independent set of $t$ Bernoulli variables and $X$ is another Bernoulli variable, such that the pair $(X_i,X)$ is independent for each $i$, then there are coefficients $\varepsilon_0,\varepsilon_1, \ldots ,\varepsilon_t$ in $\lbrace 0,1 \rbrace$ such that, if we put
$$ H=\Bigg\lbrace (x_1,\ldots,x_t,x) \in \lbrace 0,1 \rbrace ^{t+1} \Bigg| x=\varepsilon_0+\sum_{k=1}^{t}\varepsilon_kx_k \ {\sf mod} \ 2\Bigg\rbrace, \ \overrightarrow{X}=(X_{1},\ldots,X_{t},X) $$ then $P(\overrightarrow{X}=h)>0$ for any $h\in H$.
|
https://math.stackexchange.com/questions/208075/extracting-an-almost-independent-large-subset-from-a-pairwise-independent-set
|
[
"probability",
"combinatorics"
] | 10 | 2012-10-05T22:04:02 |
[] | 0 |
Science
| 0 |
350
|
math
|
How to prove this lemma related to Rolle's theorem
|
For any function $f$ denote by $Z(f)$ and $Z_o(f)$ the cardinalities of $f^{-1}(0)\cap[0,1]$ and $f^{-1}(0)\cap(0,1)$, respectively. Let $H=\\{f\in C^\infty(\mathbb{R}): \text{supp}(f) = [0,1]\\}$
From [this question](https://math.stackexchange.com/questions/664741) we have
**Lemma 1**
Let $q:x\mapsto(x-r)p(x)$ where $p\in H$ and $r\in\mathbb{R}$. Then $Z(q^{(n)})\geq Z(p^{(n-1)})+1$ for all $n\in\mathbb{N}$.
**Proof**
Note that $q^{(n)}(x) = n p^{(n-1)}(x) + (x-r)p^{(n)}(x)$. Hence $r$ is a root of $q^{(n)}$ if and only if it is a root of $p^{(n-1)}$. Moreover we have
$$\underbrace{(x-r)^{n-1}q^{(n)}(x)}_{\text{LHS}} = n (x-r)^{n-1} p^{(n-1)}(x) + (x-r)^np^{(n)}(x) = \underbrace{\frac{d}{dx}(x-r)^n p^{(n-1)}(x)}_{\text{RHS}}$$
If $q^{(n)}(r) = 0$ then $Z(q^{(n)}) - 2 = Z_o(\text{LHS}) = Z_o(\text{RHS}) \geq Z(p^{(n-1)}) - 1$.
If $q^{(n)}(r) \neq 0$ then $Z(q^{(n)}) - 1 = Z_o(\text{LHS}) = Z_o(\text{RHS}) \geq Z(p^{(n-1)})\quad\quad\quad\quad\text{q.e.d.}$
$\text{ }$
The following modification
**Lemma 2**
Let $q:x\mapsto(x-r)(x-\bar{r})p(x)$ where $p\in H$ and $r\in\mathbb{C}\setminus\mathbb{R}$. Then $Z(q^{(n)})\geq Z(p^{(n-2)})+2$ for all $n\in\mathbb{N}\setminus\\{1\\}$.
I tried to prove the same way by writing
$$\frac{j(x)k(x)}{(x-r)(x-\bar{r})}q^{(n)}(x) = \frac{d}{dx}\left(j(x)^2 k(x)\frac{d}{dx}j(x)^{-1}p^{(n-2)}(x)\right)$$ where $\frac{d}{dx}$ is differentiation along the real axis and
$j(x)=c_1(x-r)^{1-n}+c_2(x-\bar{r})^{1-n}$
$k(x) = c_3 ((x-r)(x-\bar{r}))^n$
$c_1,c_2,c_3\in\mathbb{C}$
The real and imaginary parts of $j(x)^{-1}$ are proportional, and $j(x)k(x)\in \mathbb{R}$, when choosing $c_1 = e^{i d_1}$, $c_2=e^{i d_2}$, $c_3=e^{-i (d_1+d_2)/2}$ with $d_1,d_2\in\mathbb{R}$,
but even then I couldn't get Rolle's theorem to work unless $j(x)$ has no roots on $(0,1)$. I have looked at many functions/different $n$s and even when choosing $r$ such that $j(x)$ has such roots, it seems impossible to find a counterexample to the lemma.
How to prove the lemma?
Or maybe this simpler version which remains when removing some of the assumptions:
If $p$ is smooth with $p^{(n-2)}$ having $m$ roots on $[0,1]$, then how to prove that the $n$'th derivative of $x \mapsto p(x)(x-r)(x-\bar{r})$ has at least $m-2$ roots on $(0,1)$.
|
https://math.stackexchange.com/questions/2548926/how-to-prove-this-lemma-related-to-rolles-theorem
|
[
"real-analysis",
"derivatives",
"roots"
] | 15 | 2017-12-03T06:03:26 |
[
"@mucciolo The real part can be non-zero",
"In the last (simpler) lemma still $r$ purely imaginary?",
"@DavidSpeyer I'm interested in how to prove lemma 2 further than for those choices of $r$, $n$ and $p$ that implies that $j$ has no roots on $(0,1)$",
"@GerryMyerson Any function whose domain contains $[0,1]$ and whose codomain contains $\\{0\\}$",
"I take it \"for any function $f$\" means \"for any real-valued function $f$ of a real variable\"?",
"@WillFisher I mean the space of bump functions, but the other conditions implies the $c$ so it is redundant",
"What does the subscript $c$ signify in your notation $C^{\\infty}_c$?",
"In the linked question, $r$ is assumed not in $(0,1)$. Do you want some analogous hypothesis in Lemma 2?"
] | 8 |
Science
| 0 |
351
|
math
|
Existence of function satisfying $f(f'(x))=x$ almost everywhere
|
**My project is to study the existence of a continuous function $f : \mathbb{R} \rightarrow \mathbb{R}$, differentiable almost everywhere, satisfying $f\circ f'(x)=x$ for almost every $x \in \mathbb{R}$.**
I began the study by supposing $f\in C^1(\mathbb{R})$, and I showed that no such $f$ exists.
Afterwards, I ran into difficulties when assuming only that $f$ is differentiable on $\mathbb{R}$; I received an answer using Darboux's theorem: [Questions about the existence of a function](https://math.stackexchange.com/questions/3312572/questions-about-the-existence-of-a-function?noredirect=1#comment6815760_3312572).
Now, I want to attack the initial problem. Previous arguments do not work!
Do you have any suggestions for me?
|
https://math.stackexchange.com/questions/3313126/existence-of-function-satisfying-ffx-x-almost-everywhere
|
[
"real-analysis",
"functional-analysis",
"derivatives",
"functional-equations"
] | 13 | 2019-08-04T02:40:45 |
[
"Why this question is different from?$$f'(x)=f^{-1}(x)$$ which is solvable, see video",
"@ ibnAbu can explain your approach ?!",
"Convert your equation into a differential equation and seek out for a solution",
"@Jack D'Aurizio the link math.stackexchange.com/questions/3312572/… does not answer the question you asked me?",
"The question is not yet solved in mathoverflow.net/questions/337607/…",
"@Jack D'Aurizio in the above link, I explained this case thank you for saving my question.",
"@Kavi Rama Murthy Can u explain me why u voted against this question?"
] | 7 |
Science
| 0 |
352
|
math
|
Is there any example of a real-analytic approach to evaluate a definite integral (with an elementary integrand) whose value involves Lambert W?
|
I have never seen a real-analytic approach before to evaluate integrals of the form below $$\int_a^b\text{elementary function}(x)\,dx=\text{constant involving}\,W(\cdot)\,\text{in its simplest form}\tag1.$$
For instance, on MSE, all use the residue theorem:
* [$\int_{-\infty}^{\infty}{e^x+1\over (e^x-x+1)^2+\pi^2}\mathrm dx=\int_{-\infty}^{\infty}{e^x+1\over (e^x+x+1)^2+\pi^2}\mathrm dx=1$](https://math.stackexchange.com/questions/2113205/int-infty-inftyex1-over-ex-x12-pi2-mathrm-dx-int-infty?noredirect=1&lq=1)
* [Interesting integral related to the Omega Constant/Lambert W Function](https://math.stackexchange.com/questions/45745/interesting-integral-related-to-the-omega-constant-lambert-w-function?noredirect=1&lq=1)
* [Prove that $\int_0^\infty \frac{1+2\cos x+x\sin x}{1+2x\sin x +x^2}dx=\frac{\pi}{1+\Omega}$ where $\Omega e^\Omega=1$](https://math.stackexchange.com/questions/2331600/prove-that-int-0-infty-frac12-cos-xx-sin-x12x-sin-x-x2dx-frac-pi?noredirect=1&lq=1)
* [Proof for integral representation of Lambert W function](https://math.stackexchange.com/questions/3347447/proof-for-integral-representation-of-lambert-w-function)
* [Evaluate $\int_{0}^{\infty} \ln(1+\frac{2\cos x}{x^2} +\frac{1}{x^4}) \, dx$](https://math.stackexchange.com/questions/4433252/evaluate-int-0-infty-ln1-frac2-cos-xx2-frac1x4-dx)
And the same applies to some of the wider literature I have come across:
* [Stieltjes, Poisson and other integral representations for functions of Lambert W](https://arxiv.org/pdf/1103.5640.pdf)
* [An Integral Representation of the Lambert $W$ Function](https://arxiv.org/pdf/2012.02480.pdf) **Note:** the proof is real-analytic, but the very first line assumes the validity of an integral identity which was only proven using complex analysis (Hankel contour).
So, my question is this:
Does anyone know of a proof of an identity of the form in $(1)$ that involves only real analysis (i.e. does not assume the existence of $\sqrt{-1}$)?
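(For what it's worth, identities of this type are easy to confirm numerically, e.g. with Python/mpmath; the sketch below checks the classical omega-constant integral $\int_{-\infty}^{\infty}\frac{\mathrm dx}{(e^x-x)^2+\pi^2}=\frac{1}{1+\Omega}$, which is of the form $(1)$. Of course a numerical check says nothing about whether a real-analytic proof exists.)

    from mpmath import mp
    mp.dps = 30
    lhs = mp.quad(lambda x: 1 / ((mp.e**x - x)**2 + mp.pi**2), [-mp.inf, mp.inf])
    rhs = 1 / (1 + mp.lambertw(1))     # 1/(1 + Omega)
    print(lhs, rhs)                    # should agree to working precision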
|
https://math.stackexchange.com/questions/4501736/is-there-any-example-of-a-real-analytic-approach-to-evaluate-a-definite-integral
|
[
"real-analysis",
"integration",
"definite-integrals",
"lambert-w",
"elementary-functions"
] | 12 | 2022-07-28T02:40:43 |
[
"@ТymaGaidash No, the proof must not use the fact that $\\sqrt{-1}$ exists.",
"@StevenClark No (I'm looking for an analytical proof rather than numerical). In that example, I'm actually not referring to the end result $\\int_{-\\infty}^{\\infty}{e^x+1\\over (e^x-x+1)^2+\\pi^2}\\,dx=1$ but rather the lemma $\\int_{-\\infty}^{\\infty}\\frac{a^2\\,dx}{(e^x-ax-b)^2+(a\\pi)^2}=\\frac{1}{1+W\\left(\\frac{1}{a}e^{-b/a}\\right)}$ in Jack's answer, which he said was evaluated only through the residue theorem. To qualify as an answer here, an example would be to prove the latter identity using real analysis when, say, $a=2$ and $b=3$, since $W(e^{-3/2}/2)$ doesn't have an obvious simplification.",
"Do you consider numerical integration a real-analytic approach? Some of your examples can be evaluated via numerical integration, but some of your examples also simplify which seems inconsistent with your desire to find an example that doesn't simplify. Excluding results that simplify seems rather arbitrary assuming one can show the example is inherently related to the Lambert W function. Trivial cases such as $\\frac{W(10)}{W(10)}$ don't generally meet this criteria, but it seems to me $\\int\\limits_{-\\infty}^{\\infty}{e^x+1\\over (e^x-x+1)^2+\\pi^2}\\mathrm dx=1$ does meet this criteria.",
"@StevenClark No, I think as long as your $x$ does not obviously simplify to $ye^y$ for some elementary $y$ then it's accepted (like obvious manipulations of $W((\\sqrt2+1)e^{\\sqrt2+1})$. Since otherwise we'd be running into much deeper holes involving Schanuel's conjecture.",
"Does one also have to prove a result involving $W(x)$ has no closed form that does't involve the $W(x)$ function?",
"@StevenClark I think there is a bit of misunderstanding -- I'm trying to find a valid integral representation (elementary integrand) for values of a constant involving $W(\\cdot)$ such that it cannot be simplified to a constant that doesn't involve $W$. So $1/(1+W(2))$ would be valid, but constants like $W(-1/e)=-1$ or $\\sin W(3e^3)^2=\\sin9$ would not.",
"My example is less trivial than your example, and is $\\int\\limits_{\\infty }^1 e^{1-x}\\,dx$ not a valid integral representation of $W\\left(-\\frac{1}{e}\\right)$?",
"@StevenClark No, which is the reason I added \"in its simplest form\" (or \"in most simplified form\" before this edit) so that we don't have trivial cases like $\\int_0^12x\\,dx=1=W(10)/W(10)$.",
"Does $\\int\\limits_{\\infty }^1 e^{1-x}\\,dx=W\\left(-\\frac{1}{e}\\right)=-1$ count?"
] | 9 |
Science
| 0 |
353
|
math
|
Minimum area contained between measurable set and translate by $\lambda$: A strengthening of 2018 USA TSTST #9
|
Question
Given $\lambda\in\mathbb{R}^+$, what is the smallest possible $c$ for which, given any measurable region $\mathcal{P}$ in the plane with measure $1$, there always exists a vector $\mathbf{v}$ with magnitude $\lambda$ so that the area shared between $\mathcal{P}$ and its translate by $\mathbf{v}$ is at most $c$?
Background
On this year's USA TSTST (a test that determines a group of about 30 people to take selection tests for the following year's USA team to the International Math Olympiad), there was an algebra/geometry/combinatorics hybrid problem that I found interesting:
Show that there is an absolute constant $c < 1$ with the following property: whenever $\mathcal P$ is a polygon with area $1$ in the plane, one can translate it by a distance of $\frac{1}{100}$ in some direction to obtain a polygon $\mathcal Q$, for which the intersection of the interiors of $\mathcal P$ and $\mathcal Q$ has total area at most $c$.
Essentially every solution can be boiled down to the following:
Step 1. For a given vector $\mathbf{v}\in \mathbb{R}^2$, define $f(\mathbf{v})=\mu\big(\mathcal{P}\cap(\mathcal{P}+\mathbf{v})\big)$, where $\mu(\cdot)$ is the area of a region, and $\mathcal{P}+\mathbf{v}$ consists of the points in $\mathcal{P}$ translated by $\mathbf{v}$.
Step 2. Prove $f(\mathbf{u}+\mathbf{v})\geq f(\mathbf{u})+f(\mathbf{v})-1$ and generalize it to
$$1-f\left(\sum_{i=1}^N \mathbf{v}_i\right)\leq \sum_{i=1}^N \left(1-f\left(\mathbf{v}_i\right)\right).$$
Step 3. Define, for real $t$,
$$I_t=\int_{0\leq ||\mathbf{v}||\leq t} f(\mathbf{v})\ d^2\mathbf{v} = \int_{0\leq ||\mathbf{v}||\leq t}\int_{\mathbf{x}\in \mathcal{P}} \mathbf{1}_{\mathcal{P}}(\mathbf{x}+\mathbf{v})\ d^2\mathbf{x}\ d^2\mathbf{v}.$$
We have
\begin{align}
I_t&=\int_{\mathbf{x}\in \mathcal{P}} \int_{0\leq ||\mathbf{v}||\leq t}\mathbf{1}_{\mathcal{P}}(\mathbf{x}+\mathbf{v})\ d^2\mathbf{v}\ d^2\mathbf{x}\\
&=\int_{\mathbf{x}\in \mathcal{P}} \mu\left(\mathcal{P}\cap\big\{\mathbf{x}+\mathbf{v}\big|0\leq ||\mathbf{v}||\leq t\big\}\right)\ d^2\mathbf{x}\\
&\leq\int_{\mathbf{x}\in \mathcal{P}} 1\ d^2\mathbf{x} = 1.
\end{align}
So, there must exist some $\mathbf{v}$ with $0\leq ||\mathbf{v}||\leq t$ that satisfies
$$f(\mathbf{v})\leq \frac{1}{\pi t^2}.$$
Step 4. Write this vector as
$$\mathbf{v}=\sum_{i=1}^{\lceil 100t\rceil}\mathbf{u}_i$$
where $||\mathbf{u}_i||=1/100$.
Step 5. We now have
$$1-\frac{1}{\pi t^2}\leq 1-f(\mathbf{v})\leq \sum_{i=1}^N (1-f(\mathbf{u}_i)),$$
so for some $i$ we must have
$$1-f(\mathbf{u}_i) \geq \frac{1}{\lceil 100t\rceil}\left(1-\frac{1}{\pi t^2}\right)$$
$$f(\mathbf{u}_i) \leq 1-\frac{1}{\lceil 100t\rceil}\left(1-\frac{1}{\pi t^2}\right).$$
As long as $t^2>1/\pi$, this gives us a working value of $c<1$. It is minimized at $t=0.98$, which gives
$$c\approx 0.99318.$$
More generally, if $\lambda$ is the length of our translate, this is minimized at the nearest integer multiple of $\lambda$ to $\sqrt{3/\pi}$, which gives
$$c\approx 1-2\lambda\sqrt{\frac{\pi}{27}}.$$
In particular, for large enough $\lambda$, this bound does nothing.
Progress
I've been able to improve the bound on $c$ given slightly in the following manner:
Assume, for all $||\mathbf{u}||=\lambda$, that $f(\mathbf{u})>1-\epsilon$. Then, for all vectors $\mathbf{v}$ of length $\leq n\lambda$, by writing $\mathbf{v}=\sum_{i=1}^n \mathbf{u}_i$ with $||\mathbf{u}_i||=\lambda$, we must have
$$1-f(\mathbf{v})\leq n\epsilon\implies f(\mathbf{v})\geq 1-n\epsilon.$$
Using (almost) our same integral as earlier and setting $t=N\lambda$, we have
$$\int_{\lambda< ||\mathbf{v}||\leq t} f(\mathbf{v})\ d^2\mathbf{v}\leq 1.$$
(we cannot start the integral at $0$ as vectors with magnitude $<\lambda$ cannot be represented as the sum of one vector with magnitude $\lambda$). However,
\begin{align}
\int_{\lambda \leq ||\mathbf{v}||\leq N\lambda} f(\mathbf{v})\ d^2\mathbf{v}
&=
\sum_{k=2}^N \int_{(k-1)\lambda< ||\mathbf{v}|| \leq k\lambda} f(\mathbf{v})\ d^2\mathbf{v}\\
&\geq
\sum_{k=2}^N \int_{(k-1)\lambda< ||\mathbf{v}|| \leq k\lambda} 1-k\epsilon\ d^2\mathbf{v}\\
&=
\pi\lambda^2\sum_{k=2}^N (2k-1)(1-k\epsilon)\\
&=
\pi\lambda^2\frac{N-1}{6}\left(6N+6-\epsilon\left(4N^2 + 7N + 6\right)\right),
\end{align}
so
$$\pi\lambda^2\frac{N-1}{6}\left(6N+6-\epsilon\left(4N^2 + 7N + 6\right)\right)\leq 1$$
$$N+1-\epsilon\left(\frac{4N^2 + 7N + 6}{6}\right)\leq \frac{1}{\pi(N-1)\lambda^2}$$
$$N+1- \frac{1}{\pi(N-1)\lambda^2}\leq \epsilon\left(\frac{4N^2 + 7N + 6}{6}\right)$$
$$\frac{6\left(\pi\left(N^2-1\right)\lambda^2- 1\right)}{\left(4N^2 + 7N + 6\right)\left(\pi(N-1)\lambda^2\right)}\leq \epsilon$$
$$1-\frac{6\left(\pi\left(N^2-1\right)\lambda^2- 1\right)}{\left(4N^2 + 7N + 6\right)\left(\pi(N-1)\lambda^2\right)}\geq 1-\epsilon.$$
Minimizing $N$ gives you about $0.9898$ for $\lambda=1/100$.
On the other hand, I've also been trying to find constructions that give a large value of $c$. I haven't come up with any great ones - the best I have is a circle of radius $\sqrt{\frac{1}{\pi}}$ which gives you a $c$ of about 0.9887. For larger $\lambda$ the best I can think of is something like a "star" with large radius and very many thin "prongs," but I haven't calculated the asymptotics on it yet.
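For the record, the $0.9887$ for the circle comes straight from the standard circular-lens formula for two disks of equal radius whose centers are $\lambda=1/100$ apart; a quick Python check:

    import math
    lam = 1 / 100
    r = math.sqrt(1 / math.pi)          # disk of area 1
    overlap = 2 * r**2 * math.acos(lam / (2 * r)) - (lam / 2) * math.sqrt(4 * r**2 - lam**2)
    print(overlap)                      # about 0.9887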
I believe the bound given in step 2 of the solution is sharp iff
$$(\mathcal{P}+\mathbf{u})\cap(\mathcal{P}+\mathbf{v})\subseteq\mathcal{P}\subseteq(\mathcal{P}+\mathbf{u})\cup(\mathcal{P}+\mathbf{v}),$$
but all attempts I've made to find measurable sets for which this is true or nearly true for small vectors have failed. So I'm stuck.
Anyone have any ideas, either on sharper upper bounds on $c$ or a sharper $\mathcal{P}$?
|
https://math.stackexchange.com/questions/2844749
|
[
"measure-theory",
"lebesgue-measure",
"geometric-measure-theory"
] | 18 | 2018-07-08T09:20:30 |
[
"Post in mathoverflow",
"@TakahiroWaki Can you formalize that?",
"Original problem can be solved, the value that half of circumference multiple 1/100 is upperbound. Thinned shape give a small c, maximum case would be about round and small shape, that's circle.",
"@mathworker21 You're right; I missed that. I've edited the question accordingly.",
"The question at the start is phrased poorly. You should reorder like \"Given lambda, what is the smallest possible c>o so that for any P ....\". The point is that c depends on lambda and not P."
] | 0 |
Science
| 0 |
354
|
math
|
For $n\in\mathbb{N}^+$, $\Re(s)>0$, evaluate $\int_{0}^{\infty}\sin\left(2\pi ne^{x}\right)\left[\frac{s}{e^{sx}-1}-\frac{1}{x}\right]dx$
|
Let $n$ be a positive integer, and $s\in \mathbb{C}\;,\Re(s)>0$. I want to compute the integral : $$\int_{0}^{\infty}\sin\left(2\pi ne^{x}\right)\left[\frac{s}{e^{sx}-1}-\frac{1}{x}\right]dx$$
I tried using the integral representation found [here](http://functions.wolfram.com/ElementaryFunctions/Sin/07/02/):
$$\sin(2\pi ne^{x})=\frac{\sqrt{\pi}}{2\pi i }\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Gamma(z)}{\Gamma\left(\frac{3}{2}-z \right )}\left(\pi n \right )^{-2z+1}e^{-(2z-1)x}dz\;\;\;\;\;0<\gamma<1$$ in conjunction with : $$r+\log(r)+\psi\left(\frac{1}{r}\right)=-\int_{0}^{\infty}e^{-y}\left(\frac{r}{e^{ry}-1}-\frac{1}{y}\right)dy$$ Where $\psi\left(\cdot\right)$ is the digamma function. But that made the problem even more difficult. Any help is highly appreciated.
**EDIT** :
Using the Fourier reciprocity : $$\frac{s}{e^{sx}-1}-\frac{1}{x}=2\int_{0}^{\infty}\left(\frac{1}{e^{2\pi \theta/s}-1}-\frac{s}{2\pi \theta} \right )\sin(x\theta)d\theta$$ Our integral reads : $$2\int_{0}^{\infty} g(\theta)\left(\frac{1}{e^{2\pi \theta/s}-1}-\frac{s}{2\pi \theta} \right )d\theta$$ Where : $$g(\theta)=\int_{0}^{\infty}\sin(2\pi ne^{x})\sin(x\theta)dx$$ $$=\frac{\sqrt{\pi}}{2\pi i }\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Gamma(z)}{\Gamma\left(\frac{3}{2}-z \right )}\left(\pi n \right )^{-2z+1}\frac{\theta}{\theta^{2}+(2z-1)^{2}}dz$$
|
https://math.stackexchange.com/questions/3551521/for-n-in-mathbbn-res0-evaluate-int-0-infty-sin-left2-pi-ne
|
[
"real-analysis",
"complex-analysis",
"definite-integrals",
"residue-calculus"
] | 11 | 2020-02-18T09:35:42 |
[
"Using change of variable $y=e^x$ plus noting that $\\left[\\frac{s}{e^{s x}-1}-\\frac{1}{x}\\right]=\\left[\\frac{s \\coth \\left(\\frac{s x}{2}\\right)}{2}-\\frac{1}{x}-\\frac{s}{2}\\right]$ I think your integral can be written as $$I=\\int_1^{\\infty } \\frac{\\sin (2 \\pi n y)}{y} \\left(\\frac{s \\coth \\left(\\frac{1}{2} s \\log (y)\\right)}{2 }-\\frac{1}{ \\log (y)}-\\frac{s}{2 }\\right) \\, dy$$ The integral of the $-\\frac{s}{2}$ term involving just the sine integral is the largest or principal term. When integrated the term is $\\left[ -\\frac{1}{2} s \\left(\\frac{\\pi }{2}-\\text{Si}(2 n \\pi )\\right) \\right]$.",
"I am interested in an exact from",
"Are you interested in any sort of asymptotics of $s$? Or do you need an exact closed form?",
"Yes your right Sorry",
"Are you sure you evaluated the integral with $\\sin(2\\pi n e^{x})$ not $\\sin(2\\pi nx)$ ?",
"A quick and dirty conjectural result found using Mathematica and tested for a few values of $n$ and real $s$ is $$I=\\frac{1}{2} \\pi \\left(\\coth \\left(\\frac{2 \\pi ^2 n}{s}\\right)-1\\right)-\\frac{s}{4 \\pi n}$$",
"Did you consider numeric approximation (for fixed values of $s,n$) to get an idea? If you evaluate the integral from $0$ to some $n$ this integral should be well approximated even for relatively small $n$ because for large $x$ the sin term will just average out to zero.",
"It is esoteric indeed. This integral came up in the study of a certain number-theoretic function. The function between the parentheses is self reciprocal wrt the Fourier sine transform, and can be used to prove the transformation formula of the Dedekind eta function, when the sine factor is $\\sin(2\\pi n x)$. The problem above is a little bit different, but concerns a closely related function.",
"What context did this come up in? This seems like a very esoteric problem.",
"$\\int_0^\\infty\\sin\\left(2\\pi ne^{x}\\right)\\left[\\frac{s}{e^{sx}-1}-\\frac{1}{x}\\right]dx$ is the incomplete version of $\\int_{-\\infty}^\\infty\\sin\\left(2\\pi ne^{x}\\right)\\left[\\frac{s}{e^{sx}-1}-\\frac{1}{x}\\right]dx$ which has way more chances to be evaluated, why do you need the former",
"You meant $\\int_{-\\infty}^\\infty\\sin\\left(2\\pi ne^{x}\\right)\\left[\\frac{s}{e^{sx}-1}-\\frac{1}{x}\\right]dx$"
] | 11 |
Science
| 0 |
355
|
math
|
The sum of eigenvalues of integral operator $S(f)(x)=\int_{\mathcal{X}} k(x,y)f(y)d\mu(y)$ is given by $\int_{\mathcal{X}} k(x,x) d\mu(x)$?
|
**Setup:** Let $(\mathcal{X},d_{\mathcal{X}})$ and $(\mathcal{Y},d_{\mathcal{Y}})$ be two separable metric spaces. Let $M^1(\mathcal{X})$ be the space of Borel probability measures on $\mathcal{X}$ with finite first moment, i.e. a Borel probability measure $\mu$ on $\mathcal{X}$ is in $M^1(\mathcal{X})$ if $\int d_{\mathcal{X}}(x,o) d\mu(x)<\infty$ for any $o\in\mathcal{X}$. The space $M^1(\mathcal{Y})$ is defined in similar fashion.
Fix $\mu\in M^1(\mathcal{X})$ and $\nu\in M^1(\mathcal{Y})$ and define $$ d_\mu(x_1,x_2)= d_{\mathcal{X}}(x_1,x_2) -\int d_{\mathcal{X}}(x_1,x)\, d\mu(x) - \int d_{\mathcal{X}}(x_2,x)\, d\mu(x) + \int d_{\mathcal{X}}(x,x')\, d\mu^2(x,x'), $$ and a similar definition of $d_\nu:\mathcal{Y}\times \mathcal{Y}\to\mathbb{R}$.
Now let $S:L^2(\mathcal{X}\times \mathcal{Y},\mathcal{B}(\mathcal{X})\otimes \mathcal{B}(\mathcal{Y}),\mu\times \nu) \to L^2(\mathcal{X}\times \mathcal{Y},\mathcal{B}(\mathcal{X})\otimes \mathcal{B}(\mathcal{Y}),\mu\times \nu),$ be a Hilbert-Schmidt operator given by $$ S(f)(x,y) = \int d_\mu(x,x')d_\nu(y,y') f(x',y') d\mu\times \nu(x',y'). $$ and let $\\{\lambda_i\\}_{i\geq 1}$ denote the non-zero eigenvalues of $S$ repeated according to multiplicity.
**Question:** How do I prove the following identity: $$\sum_{i=1}^\infty\lambda_i=\int d_\mu(x,x)d_\nu(y,y) \, d\mu\times \nu(x,y).$$ I tried but failed to show that $S$ is of trace class, since trace-class operators, under certain conditions (which I also can't verify in this setup), satisfy $$ Trace(S)=\int d_\mu(x,x)d_\nu(y,y) \, d\mu\times \nu(x,y), $$ which would yield the result since $Trace(S)=\sum_{i=1}^\infty\lambda_i$ (if it were of trace class).
_The identity $\sum_{i=1}^\infty\lambda_i=\int d_\mu(x,x)d_\nu(y,y) \, d\mu\times \nu(x,y)$ is stated in [Distance covariance in metric spaces by Russell Lyons (2013) Theorem 2.7](https://projecteuclid.org/euclid.aop/1378991840), without proof, so my approach with traces is only an idea._
_If another way of proving the identity appears or if there is a counterexample, that would more than satisfy my needs. In the case of a counterexample i would very much appreciate stronger initial conditions rendering the identity true._
Please bear in mind that I am a novice in the theory of operators and trace-class operators (I only went down this road to explain the equality above), so references would be much appreciated.
**Update:** A counterexample to the operator being of trace class is presented by Russell Lyons in the errata to the mentioned paper. Furthermore, a proof that the formula holds and that the operator is of trace class, whenever the marginal spaces possess additional nice properties (isometric embeddability into Hilbert spaces), is also presented in this errata.
|
https://math.stackexchange.com/questions/2035989/the-sum-of-eigenvalues-of-integral-operator-sfx-int-mathcalx-kx-yf
|
[
"functional-analysis",
"statistics",
"operator-theory",
"spectral-theory",
"trace"
] | 9 | 2016-11-29T08:56:58 |
[
"@Renart I think there are more subtle problems. For example $y\\mapsto K(x,y) = \\sum_{n}e_n(y) \\int e_n(z) K(x,z)dz,$ only in $L^2(\\mathbb{R})$ for each fixed $x$, which by definition means that for any $x$ $\\|K(x,\\cdot)- \\sum_{n=1}^k e_n(\\cdot) \\int e_n(z) K(x,z)dz\\|_2\\to_k 0.$ But in order to say that $\\int \\sum_{n=1}^\\infty e_n(x)\\int K(x,y)e_n(y)dy dx= \\int K(x,x) dx$ we need stronger convergence. Anyways, all this assumes that the operator is of trace class, which i haven't even verified.",
"Well i don't have tried it myself but... From what i read the answer of jonhatan doesn't really use the fact that it's on $\\mathbf R$. The only thing that need justification is the permutation of a sum and an integral at the very end. Fubini works pretty independently of the space you're concidering.",
"@Renart Yes that is indeed one of the questions that i have read, suggesting the trace approach. Though this is a integral kernel over $\\mathbb{R}$, and i'm quite sure that problems arise when integrating over arbitrary metric spaces.",
"math.stackexchange.com/questions/185587/…",
"If not possible in the above setup, can the equality be shown for when $\\mathcal{X}$ and $\\mathcal{Y}$ are separable Hilbert spaces?"
] | 5 |
Science
| 0 |
356
|
math
|
Spectacular failure of Lebesgue differentiation for rectangles
|
Let $\mathcal{R}$ be the set of rectangles in the plane and, given $f \in L^1$ let $$ f^*(x) = \sup_{x \in R \in \mathcal{R}} \frac{1}{ \lvert R \rvert} \int_R \lvert \, f \,\rvert $$ as defined in [this question](https://math.stackexchange.com/questions/641399/maximal-functions-where-weak-type-inequality-fails). You can show that the weak-type inequality fails for this operator, that is, there is no constant $A$ such that $$ m(\\{ f^* > \alpha \\}) < \frac{A}{\alpha} \lVert \, f \, \rVert_1 $$ for each integrable $f$. From Stein and Shakarchi's book on real analysis I'd like to show the following, much stronger claim: there is an integrable $f$ such that
$$ \limsup_{\text{diam}(R) \to 0} \frac{1}{|\,R\,|} \int_R \lvert \, f \, \rvert = \infty \text{ a.e.} $$ where diam, of course, is the diameter. According to the book this "should" follow from the failure of the weak type inequality, but I don't really know how to do it.
I was thinking something like the following, but it failed. It suffices to find an $f$ such that the desired conclusion holds only on a set of positive measure, for we can translate around and get the desired effect. So I wanted to show that if the $\limsup$ expression is finite everywhere, the weak-type inequality holds, for that specific function. That would get me what I want. However, I got nowhere with finding a function for which the weak-type inequality fails, sadly.
Edit: Whoops, this is actually very hard. In its most general form it relies on a result found on page 441 of Stein's book Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals. Hopefully this specific case can be had more easily, though?
|
https://math.stackexchange.com/questions/1854652/spectacular-failure-of-lebesgue-differentiation-for-rectangles
|
[
"real-analysis",
"functional-analysis",
"lebesgue-integral"
] | 11 | 2016-07-09T20:30:12 |
[
"Thanks for referring to Stein's proof. I did struggle with (15), until I realized that $|E_k|$ is bounded by the size of $B$ so $||g'_k|| \\rightarrow 0$ as $k\\rightarrow\\infty$.",
"Have you studied Stein's proof? It is quite straightforward and most of the work is in making formal your intuition that you can find a contradiction on a compact set and translate it around.",
"Banach-Tarski 2..."
] | 3 |
Science
| 0 |
357
|
math
|
Is there an open subset $A$ of $[0,1]^2$ with measure $>\frac{1}{100}$ that satisfies this property?
|
Can we find for any given $\varepsilon>0$ an open subset $A\subseteq[0,1]^2$ with measure $>\frac{1}{100}$ such that, for any smooth curve $\gamma:[0,1]\to\mathbb{R}^2$ of length $1$, the set $\gamma+A=\\{\gamma(t)+a;t\in[0,1],a\in A\\}$ does not contain any balls of radius $\varepsilon$?
I wouldn't mind changing the $\frac{1}{100}$ for any other positive constant. Also, I ask about smooth curves but it may make more sense to consider in general $1$-Lipschitz functions $\gamma:[0,1]\to[0,1]^2$.
For context, a positive answer to this question could be useful for [this other question](https://mathoverflow.net/questions/420897/what-is-the-smallest-size-of-a-shape-in-which-all-fixed-n-polyominos-can-fit/421175#421175). But the question is also interesting in itself, of course.
|
https://math.stackexchange.com/questions/4445539/is-there-an-open-subset-a-of-0-12-with-measure-frac1100-that-sati
|
[
"measure-theory",
"multivariable-calculus",
"geometric-measure-theory"
] | 16 | 2022-05-07T17:12:26 |
[
"Now posted to MO, mathoverflow.net/questions/422493/…",
"Yes, I was asking having in mind the negative answer (baseless belief as well!)",
"Oh and if you mean that if the curves $\\gamma$ have length $10$ the answer is negative then yes, I am interested (for some reason I think that the answer of these problems is going to be positive but it's a baseless belief)",
"The case of $\\gamma$ having length $10$ would imply the question as above, because any smooth curve of length $1$ is a subset of a smooth curve of length $10$. But any ideas of how to solve problems similar to the one above are welcome. For example I would also be interested in the case of the curves $\\gamma$ having length $\\leq\\frac{1}{2}$",
"Would you be interested in the case when $\\gamma$ has length, say, 10?",
"Btw, I am not sure why the \"multivariable calculus\" tag is more appropiate than the \"recreational mathematics\" one on this question. This is not part of any mathematics curriculum that I know of",
"Thanks, that's what I missed: the restriction on length.",
"@mihaild I don't understand very well, your curve $\\gamma$ is $[0,1]\\to\\mathbb{R}$, not $[0,1]\\to\\mathbb{R}^2$, and I don't think it has length $1$",
"If $A$ is open, it contains an open ball of radius $x$, now if we take $\\gamma(t) = \\sin\\frac{20 \\pi t}{x}$, $\\gamma +A$ fills unit square (as moving even just this ball along $\\gamma$ fills it). I guess something is wrong with this reasoning, but I can't find what."
] | 9 |
Science
| 0 |
358
|
math
|
Closed form expression for $\sqrt{4+\sqrt[3]{4+\sqrt[4]{4+\dots}}}$
|
Since $e=\sum_{n=0}^\infty\frac{1}{n!}=1+1+\frac12(1+\frac13(1+\frac14(1+\dots)))$, we have
$$4^{e-2}=\sqrt{4\cdot\sqrt[3]{4\cdot\sqrt[4]{4\cdots}}}$$
Is there however a nice way to express the radical in the title too?
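For reference, the numerical value is easy to pin down by truncating the radical at a finite depth and working from the inside out; a short Python sketch:

    def nested_radical(depth=60):
        value = 0.0
        for k in range(depth, 1, -1):      # innermost root has index `depth`
            value = (4 + value) ** (1.0 / k)
        return value
    print(nested_radical())                # about 2.4016155...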
|
https://math.stackexchange.com/questions/662563/closed-form-expression-for-sqrt4-sqrt34-sqrt44-dots
|
[
"closed-form",
"radicals",
"nested-radicals"
] | 16 | 2014-02-03T13:49:36 |
[
"math.stackexchange.com/questions/875203/… math.stackexchange.com/questions/1782032/… math.stackexchange.com/questions/1843974/…",
"It might be interesting to you that by taking $4$ out of the first root, we obtain another form of this radical: $$2\\sqrt{1+\\sqrt[3]{\\frac{1}{2^4}+\\sqrt[4]{\\frac{1}{2^{22}}+\\dots\\sqrt[k]{\\frac{1}{2^{k!-2}}+\\dots}}}} \\approx 2.401615526...$$ You just have to carefully take $1/4$ under each root, moving further and further",
"I have to agree, you probably won't find an answer to this. Noting that the power of the radicals changes (or if the values inside change) makes me want to say there is no closed form.",
"@BarryCipra Well that's unfortunate... these things aren't perfect. On the other hand, a constant like $4^e$ seems pretty rare, I don't think I've ever gotten anything quite like it when evaluating an integral, say. But it does seem weird that it wouldn't be detected.",
"@BrunoJoyal, I agree it's a good idea, but with one caveat: I ran $4^{e-2}\\approx2.70675377667$ through an ISC, just as a \"sanity check,\" and came up empty.",
"I would try to evaluate it to high precision and run the resulting approximation through a reverse symbolic calculator to see if anything comes up."
] | 6 |
Science
| 0 |
359
|
math
|
Evaluating a Polynomic-Trigonometric-Hyperbolic Integral
|
Within [this AoPS thread](https://artofproblemsolving.com/community/c7h1746945p11375644) it is asked to evaluate the following integral
> $$\mathfrak I~=~\int_0^\infty \frac{x\sin x}{\cos x+\cosh^2 x}\mathrm dx\tag1$$
In order to be precise there is also a possible closed-form conjectured which is given by
> $$\mathfrak I~=~G-\frac12\tag2$$
But as is pointed out within the linked thread, this seems to be only a reasonable approximation, off after the $5$th decimal digit.
I have to admit that it is highly improbable that there exists a nice-looking closed-form for $(1)$, since the integrand involves polynomial, trigonometric as well as hyperbolic functions. I am not even sure how to get started, i.e. which substitution to choose or which technique to start with.
A related, but perhaps more handable integral, would be the following
> $$\mathfrak J~=~\int_0^\infty \frac{\sin x}{\cos x+\cosh^2 x}\mathrm dx\tag{1$'$}$$
Out of experience I could imagine that $(1')$ may have a closed-form in terms of known constants $($or series$)$, since it only contains the two closely connected trigonometric and hyperbolic functions.
> Is it in fact possible to deduce a closed-form for $(1)$ and $(1')$? For myself I cannot offer an approach, since everything I tried was not helpful at all; hence I was not even able to perform one or two steps to simplify the given integrals. I would be glad to see a full solution or even attempts at evaluating $(1)$ and $(1')$, since I have no idea how to deal with such integrands.
Thanks in advance!
**EDIT**
Out of pure chance I just stumbled upon [a related MSE question](https://math.stackexchange.com/questions/1798467/why-does-the-hard-looking-integral-int-0-infty-fracx-sin2x-coshx/1798559#1798559) dealing with the integral
$$\int_0^\infty\frac{x\sin^2x}{\cosh x+\cos x}\mathrm dx=1$$
This, on the other hand, motivates me to believe that there may be a closed-form for $(1)$.
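For completeness, here is a quick Python/mpmath comparison of $(1)$ with the conjectured $G-\frac12$; it merely reproduces the numerical discrepancy mentioned above (the integrand decays like $xe^{-2x}$, so truncating at $40$ is harmless).

    from mpmath import mp
    mp.dps = 30
    f = lambda x: x * mp.sin(x) / (mp.cos(x) + mp.cosh(x)**2)
    print(mp.quad(f, mp.linspace(0, 40, 21)))   # the integral (1); tail beyond 40 is negligible
    print(mp.catalan - mp.mpf(1) / 2)           # the conjectured value G - 1/2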
|
https://math.stackexchange.com/questions/3154378/evaluating-a-polynomic-trigonometric-hyperbolic-integral
|
[
"integration",
"definite-integrals",
"closed-form"
] | 14 | 2019-03-19T10:33:16 |
[
"@GEdgar I have problems with applying the residue theorem for complicated integrals correctly. Are you here able to do so?",
"@Zacky I am aware of this list but has not thought about searching within for the given integral. As you already mentioned this list is unreliable but cotains some \"good\" approximations but way to many completely wrong values.",
"The integral also appears within this list: en.wikiversity.org/wiki/User:Integrals123 (I typed in ctrl + F cosh and scrolled down till the $54$-th integral), which unfortunately has many wrong answers since it was done with Inverse Symbolic calculator and that result kinda played us. But yes a closed form would be interesting even though it's not $G-\\frac12$.",
"Note. Since the integrand is even, we ave $$2 \\mathfrak I~=~\\int_{-\\infty}^\\infty \\frac{x\\sin x}{\\cos x+\\cosh^2 x}\\mathrm dx$$ and there is a chance this could be done using a contour in the complex plane."
] | 4 |
Science
| 0 |
360
|
math
|
How to generalize Reshetnikov's $\arg\,B\left(\frac{-5+8\,\sqrt{-11}}{27};\,\frac12,\frac13\right)=\frac\pi3$?
|
Given the [argument](https://en.wikipedia.org/wiki/Argument_\(complex_analysis\)) $\arg(z)$, we can observe that for $k=1,2,3$, $$\arg z_1 = \frac{k\,\pi}3, \quad z_1 = \left(\tfrac{1+\sqrt{-3}}{2}\right)^k\qquad\tag1$$ $$\qquad \arg z_2=\frac{k\,\pi}3, \quad z_2 = \left( B\Big(\color{blue}{\tfrac{-5+8\,\sqrt{-11}}{27}};\,\tfrac12,\tfrac13\Big)\right)^k \tag2$$ and _incomplete beta function_ $B(z;a,b)$. [The second](https://math.stackexchange.com/questions/687567/closed-form-for-2f-1-left-frac12-frac23-frac32-frac8-sqrt11-i-5) with $k=1$ is by V. Reshetnikov.
But that cannot be an isolated result. Note that $z_1^3=-1$ while the complex number $z_2$ has a _real_ cube $z_2^3 \approx 6.1319080509$ with no known closed-form.
Reshetnikov states (without giving details) that $(2)$ is equivalent to, $$B\left(\frac19;\,\frac16,\frac13\right) = \frac12\,B\left(\frac16,\frac13\right) =\frac{1}{2\sqrt{\pi}}\,\Gamma\left(\frac16\right)\Gamma\left(\frac13\right)\tag3$$ Thus, $(3)$ is related to, $$I\left(\color{blue}{\frac19};\,\frac16,\frac13\right)=\frac12 \tag4$$ with _regularized beta function_ $I(z;a,b)$. The equation $$I\left(z;\,\frac16,\frac13\right)=\frac1n \tag5$$ to find some algebraic $z$ for a given integer $n$ is quite easy to solve. For example, the smallest real root of, $$-1 + 99 z - 243 z^2 + 81 z^3= 0\tag6$$ will yield $n=3$. More specifically, $$I\left(\frac19\Big(1-4\sin\big(\tfrac{\pi}{18}\big)\Big)^2;\,\frac16,\frac13 \right)=\frac13\quad\quad\tag7$$ $$\quad I\left(\frac13\Big(1-\frac{\sqrt2}{\sqrt[4]3}\Big)^2;\,\frac16,\frac13\right) =\frac14\quad\tag8$$ and so on.
> **Q:** But how do we get from $\color{blue}{\frac{-5+8\,\sqrt{-11}}{27}}$ of $(2)$ to $\color{blue}{\frac19}$ of $(4)$? (I assume a hypergeometric transformation is involved?)
If the answer can be found, then perhaps we can use an expression $P(z)$ of a root $z$ of $(6)$ to find another, $$\arg z_3=\frac{\beta\,\pi}3,\quad z_3=B\Big( P(z);\,\tfrac12,\tfrac13\Big)\quad\tag7$$ where $\beta$ is a rational/algebraic number?
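(Numerically, both $(2)$ with $k=1$ and $(4)$ are easy to confirm, e.g. with Python/mpmath, using the standard hypergeometric representation $B(z;a,b)=\frac{z^a}{a}\,{}_2F_1(a,1-b;a+1;z)$ with principal branches; this is only a sanity check, not a route to the answer.)

    from mpmath import mp
    mp.dps = 30
    def B_inc(z, a, b):                     # incomplete beta via 2F1 (principal branch)
        return z**a / a * mp.hyp2f1(a, 1 - b, a + 1, z)
    a, b = mp.mpf(1) / 2, mp.mpf(1) / 3
    z2 = (-5 + 8 * mp.sqrt(-11)) / 27
    print(mp.arg(B_inc(z2, a, b)) / mp.pi)  # should be ~1/3, i.e. arg = pi/3, if the claim holds
    a, b = mp.mpf(1) / 6, mp.mpf(1) / 3
    print(B_inc(mp.mpf(1) / 9, a, b) / mp.beta(a, b))   # the regularized value in (4), ~0.5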
|
https://math.stackexchange.com/questions/2053470/how-to-generalize-reshetnikovs-arg-b-left-frac-58-sqrt-1127-fra
|
[
"complex-analysis",
"special-functions",
"closed-form",
"pi",
"beta-function"
] | 13 | 2016-12-10T21:52:11 |
[
"@J.M.: I rephrased it for clarity. :)",
"The exponent in $(2)$ should be inside $\\arg$, no?"
] | 2 |
Science
| 0 |
361
|
math
|
Covering number/Metric Entropy of the unit ball with respect to Mahalanobis distance
|
Let $B$ denote the unit ball on $\mathbb{R}^d$ and $N(\epsilon, B, d)$ be the cardinality of the smallest $\epsilon$-cover of $B$. An epsilon cover is a set $T \subset B$ such that for any $x \in B$, there is a $t \in T$ with $d(t,x) \le \epsilon$. See for example [here](https://www.stat.berkeley.edu/%7Ebartlett/courses/2013spring-stat210b/notes/12notes.pdf). $N$ is referred to as the covering number, and $\log N$ is the Metric entropy.
Consider the following result: let $\|\cdot\|$ be a norm on $\mathbb{R}^d$ then $$ \frac{1}{\epsilon^d} \le N(\epsilon, B, \|\cdot\|) \le \left (1+\frac{2}{\epsilon} \right)^d. $$
I would like to know if there are bounds on the covering numbers that are dimension free when we choose the metric to be the Mahalanobis distance $d_S(x,y) = \|S^{-1/2}(x-y)\|_2$ for some positive definite covariance matrix $S$. Are there results along the lines of:
$$ \frac{1}{\epsilon^{f(S)}} \le N(\epsilon, B, d_S) \le \left (1+\frac{2}{\epsilon} \right)^{f(S)}. $$ where $f(S)$ is some quantity depending on $S$? An example I have in mind is when $S$ is diagonal with quickly decaying diagonal elements, e.g. $S_{ii} = i^{-2}$.
|
https://math.stackexchange.com/questions/4707017/covering-number-metric-entropy-of-the-unit-ball-with-respect-to-mahalanobis-dist
|
[
"real-analysis",
"combinatorics",
"geometry",
"analysis",
"statistics"
] | 10 | 2023-05-26T07:18:57 |
[] | 0 |
Science
| 0 |
362
|
math
|
Expectation of maximum of minimums of permutations
|
Assume $n$ random permutations $\pi_1,\pi_2,\ldots,\pi_n: \lbrace 1,2,\ldots,m \rbrace \rightarrow \lbrace 1,2,\ldots,m \rbrace$. Let $X_i = \min(\pi_1(i),\pi_2(i),\ldots,\pi_n(i))$ and $Y = \max(X_1, X_2, \ldots, X_m)$. What is the expectation of $Y$, $E(Y)$, as function of $n$? An upper bound approximation for $E(Y)$ would also be very helpful.
Obviously, we have $E(Y)=m$ for $n=1$ and $E(Y)=1$ for $n\rightarrow\infty$.
I know that the distribution of $X_i$ is given by $P(X_i\leq k) = 1 - \left(1-\frac{k}{m}\right)^n$. However, since $X_1,X_2,\ldots,X_m$ are not independent, it is not possible to get the distribution of $Y$ by $P(Y \leq k) = P(X_1 \leq k \wedge X_2 \leq k \wedge \ldots \wedge X_m\leq k) \neq P(X_1 \leq k) \cdot P(X_2 \leq k) \cdot\ldots \cdot P(X_m \leq k)$.
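For experimentation, here is a small Python sketch (assuming `numpy` is available) that estimates $E(Y)$ by simulation; it at least gives values against which a proposed formula or upper bound can be tested.

    import numpy as np
    rng = np.random.default_rng(0)
    def expected_Y(m, n, trials=5000):
        total = 0.0
        for _ in range(trials):
            perms = np.array([rng.permutation(m) + 1 for _ in range(n)])  # n x m, values 1..m
            total += perms.min(axis=0).max()     # Y = max_i min_j pi_j(i)
        return total / trials
    for n in (1, 2, 3, 5, 10):
        print(n, expected_Y(50, n))              # E(Y) = m when n = 1, decreasing toward 1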
|
https://math.stackexchange.com/questions/2257782/expectation-of-maximum-of-minimums-of-permutations
|
[
"probability",
"statistics",
"permutations"
] | 7 | 2017-04-29T08:56:32 |
[] | 0 |
Science
| 0 |
363
|
math
|
Conditional expectation continuous in the conditioning argument?
|
Let $X$ and $Y$ be random vectors defined on a common probability space. $X$ takes values in a finite-dimensional space $\mathcal{X} \subset \mathbb{R}^p$, while $Y$ takes values in $\mathbb{R}$. The conditional expectation $E(Y \mid X)$ is then a random variable that is uniquely defined up to null sets.
I am seeking a set of sufficient conditions on the joint distribution of $(X,Y)$ for the following statement to be true:
Given any point $x_0 \in \mathcal{X}$, there exists a function $f \colon \mathcal{X} \to \mathbb{R}$ such that (i) $f(X)$ is a version of $E(Y \mid X)$, and (ii) $f(\cdot)$ is continuous at $x_0$.
Obviously, the continuity part is the non-trivial one. By Lusin's theorem, any measurable function (such as any version of the conditional expectation function) is "nearly continuous", but this is not quite enough for me.
Ideally the sufficient conditions for the above statement would not involve restrictions on the densities or conditional densities of $X$ and $Y$. The problem that motivates this question has a complicated geometry, so it is difficult to characterize densities with respect to fixed dominating measures.
If you require more structure to the problem (but ideally the question would be answered in more generality), you may assume: $Y = g(A)$ for a continuous function $g(\cdot)$, $X = A + B$, and the random vectors $A$ and $B$ are independent. However, $A$ and $B$ may concentrate on different subspaces of $\mathbb{R}^p$, each of which is a complicated manifold.
Thank you in advance for your time!
|
https://math.stackexchange.com/questions/1896209
|
[
"probability-theory",
"measure-theory",
"continuity",
"conditional-expectation"
] | 11 | 2016-08-18T07:47:08 |
[] | 0 |
Science
| 0 |
364
|
mathematica
|
Changing FrontEnd automatic scrolling in version 8
|
In _Mathematica_ versions < 8, the FrontEnd has a very intelligent behavior:
* On evaluation, it by default automatically scrolls down the Notebook window to the last printed Output cell but also allows a user to scroll the Notebook window up by hand and then does not scroll down the window again automatically.
* If the user wishes to restore automatic scrolling it is sufficient to scroll down the window to the last currently printed Output cell and automatic scrolling will be restored.
I feel such behavior very comfortable. But in _Mathematica_ 8 we have no such behavior by default. I found that it may be partially restored by setting
SetOptions[$FrontEnd,
EvaluationCompletionAction -> {"ScrollToOutput"}]
But then it is not possible to stop automatic scrolling by scrolling the window by hand. Is it possible to restore the old scrolling behavior?
* * *
Through _Mathematica_ 10.4 the old scrolling behavior has not been restored.
* Have any new options come on line to control this?
* Is there a hook to determine scroll position that might be used for a workaround?
* Could [`PrintTemporary`](http://reference.wolfram.com/language/ref/PrintTemporary.html) and/or [`Dynamic`](http://reference.wolfram.com/language/ref/Dynamic.html) (which is active only when visible) be used to simulate the old behavior?
* If the old behavior is simply not achievable what is the best alternative for a similar workflow?
|
https://mathematica.stackexchange.com/questions/1948/changing-frontend-automatic-scrolling-in-version-8
|
[
"front-end",
"customization",
"cells",
"printing"
] | 37 | 2012-02-17T20:42:53 |
[
"@Wizzerad In Mathematica version 12.0 your problem seems to be resolved.",
"@Wizzerad I suggest you to ask the tech support about it because (as you see) no one here seems to know how to get tiny control on the FrontEnd behavior while evaluating. If you get a solution, please post it here - it will help others.",
"In Mathematica 10, it does not seem possible to peacefully write some code while it is evaluating. It keeps jumping back and forth between the code line I am writing and the end of the input part of the section that is being evaluated. Is there a way to normally work on the code while Mathematica 10 is evaluating part of the notebook?",
"In version 10.0 I have that, by default, the front end always auto-scrolls to view new output, regardless of previous hand scrolls by the user. I take it from this question that I may disable all auto-scrolling by setting EvaluationCompletionAction to something else. Is that correct? Are there any other options as of version 10?",
"In March 2015 I send a suggestion to WRI tech support (CASE:2679254) with a link to this thread. They replied that it was forwarded to \"the appropriate people in our development group\", so I think we at least have a hope. I have nothing against the new section in my question.",
"Alexey, I added a new section to this question probing for alternatives and workarounds, and noting that the behavior has not been restored through 10.1 and is presumably not going to be. I hope you approve.",
"@AlexeyPopkov Ok :) good luck.",
"@shrx I mean at least Mathematica versions 5, 6 and 7 (I worked with).",
"@Alexey Popkov It is unclear, do you mean: \"In Mathematica versions < 8...\"?",
"@Sjoerd I agree that editing during evaluation was not always stable (of course, it does not mean that this is bad idea itself). The main advantage of the old behavior is the ability to switch between inspecting of already printed outputs and automatic scrolling.",
"Well, it happened to me quite a lot that I got drawn to the bottom of the print area at times I didn't want to be there and when I succeeded staying above the scroll area it wasn't very stable either.",
"@Sjoerd With the old behavior we could not only edit the input during evaluation (as we can by default now) but also inspect already printed outputs from the running evaluation (by scrolling the page). And all of this without loosing automatic scrolling when we wish! I see no advantages in loosing this.",
"Actually, I found the old behaviour rather annoying. Quite distracting when you were editing and running evaluations at the same time.",
"Welcome, Alexey!"
] | 14 |
Technology
| 0 |
365
|
mathematica
|
Is it possible to regain Mathematica 5.2's palette input focus behaviour with version 8.0?
|
Between Mathematica 5.2 and later versions, there has been a change in determining which notebook gets the palette input focus, which leads to quite unfortunate behaviour if you use focus-follows-mouse as I do.
For those who don't know focus-follows-mouse: This mode gives the keyboard input focus to the window which currently contains the mouse pointer. Now consider the following situations: There are two notebooks and a palette open (one of the notebooks might be the help window, which under Mathematica is basically just a notebook). Let's say, the first notebook is on the left, the second one is in the middle, and the palette is on the right. Let's also say that I'm working on the left notebook, and now want to use the palette. Therefore I move the mouse to the palette and press the button I want.
In Mathematica 5.2 this had exactly the intended effect: Mathematica remembered that I last interacted with the left notebook, and inserted the input generated by the palette there. However in Mathematica 8.0 the input is instead inserted in the _right_ notebook.
Of course it's not hard to understand how this happens (and indeed, the problem is not restricted to Mathematica, but occurs also e.g. with Gimp; however the fact that Mathematica 5.2 got it right gives me hope that it's just some option I have to toggle or some initialization I have to edit to regain the behaviour). When I move the mouse pointer from the left notebook to the palette, it of course crosses the right notebook which is in between, and therefore focus-on-mouse gives the keyboard input focus to that window. Of course it loses the input focus again when I move on to the palette (which ultimately gets the input focus), but newer Mathematica versions obviously record which window had input focus last and use _that_ to determine which notebook the palette should act on.
Therefore my question is: Is there some option I can set to recover the old Mathematica-5.2 behaviour? Or if not, is there a way to get it with some Mathematica programming (which I then could put into an init file)?
**Edit**
As I said in a comment to an answer that was deleted in the mean time, I've found out that Mathematica 7 still behaves in the way I want. So the change was obviously introduced in between 7 and 8. I'm adding that here so that the information is visible to those who cannot see deleted answers (and in case the deleted answer including comments might eventually go away completely).
Also note that (as I also wrote in that comment) `SelectedNotebook[]` changes the same way (i.e. follows focus unconditionally in version 8, but changes only on interaction in v7).
|
https://mathematica.stackexchange.com/questions/857/is-it-possible-to-regain-mathematica-5-2s-palette-input-focus-behaviour-with-ve
|
[
"front-end",
"customization"
] | 14 | 2012-01-28T04:30:14 |
[
"It is not the case anymore, right?",
"I'd like to add a related difference between 5.2 and 8 which annoyes me a bit: select (highlight) any Keyword, say \"Solve\", in the code, then press F1 to get help. The window with the help system opens and displayes the explanation of that keyword. Now continue to work in your notebook hereby moving the cursor to another position. Now you wish to come back to the help for our keyword \"Solve\". Pressing F1 in 5.2 leads you where you want, 8 doesn't (but to the character at your pursor position). So some automatic focusing mechanism in 8 leads to an unwanted (from my point of view) effect.",
"AFIK, OS X doesn't have focus-follows-mouse. Keyboard focus remains with the active window even if the mouse focus moves to, say, a floating palette.",
"Ah, and note that those previous versions show the good behaviour runnning on the very same computer, under the very same operating system. Oh, and I just noted that I made a mistake in my description of focus-follows-mouse; fixed now.",
"Do you have Focus-follows-mouse on OS X (I never worked with it, so I can't tell, but e.g. Windows doesn't have it)? Because the problem description only makes sense with Focus-follows-mouse. The point is, previous versions of Mathematica contain code that creates the good behaviour (and that code must have been there on purpose, because a naive implementation will give the exact behaviour I see with 8.0.0.0), and therefore I think that code might also be in 8.0.0.0, but just not activated for some reason (I hope it's still there). Of course it could also be a bug which was fixed in 8.0.4.",
"I can not reproduce this problem with Mathematica 8.0.4 running under OS X 10.6.8 on a Mac. Are you sure this isn't a problem with your OS?"
] | 6 |
Technology
| 0 |
366
|
mathematica
|
What is the complete list of ExportPacket formats?
|
`ExportPacket` is the most fundamental way we can turn `Box` forms into string data in other formats via the FE.
For example, a classic way this works is:
First@FrontEndExecute@
ExportPacket[Cell@BoxData@ToBoxes@Hyperlink["asdasd", "asdasd"],
"PlainText"]
"asdasd"
Or
First@FrontEndExecute@
ExportPacket[Cell@BoxData@ToBoxes@Hyperlink["asdasd", "asdasd"],
"InputText"]
"Hyperlink[\"asdasd\", {\"asdasd\", None}]"
It's enormously useful, but obviously entirely undocumented.
Now I'm interested in figuring out what other options it can take. Currently I've scraped up the following formats:
{
"PostScript", "InputForm", "GIF",
"BoundingBox", "NotebookString",
"EnhancedMetafile", "Metafile", "MGF",
"CDFNotebookString", "PDF",
"PICT", "BitmapPacket",
"InputText", "PlainText", "DefaultText" (* thanks to John Fultz for this one *)
}
But it can take more [per John Fultz](https://mathematica.stackexchange.com/questions/1319/how-do-i-extract-the-contents-of-a-selected-cell-as-plain-text/1411#comment3519_1411)
So what's the complete list of formats? As a real long shot, what're the other options it can take?
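Not part of the original question, but a hedged way to exercise candidate names: wrap the same call pattern shown above in a throwaway helper (`probe` below is defined here, not an existing system symbol) and map it over the strings, then inspect what the front end returns for each.
probe[fmt_String] := fmt ->
   Quiet@FrontEndExecute@
     ExportPacket[Cell@BoxData@ToBoxes@Hyperlink["asdasd", "asdasd"], fmt];
probe /@ {"PlainText", "InputText", "NotebookString"}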
* * *
### Scraping notes
For those interested, I just scraped those formats from the `DownValues` of various `Export` functions:
System`ConvertersDump`installSource[
DeleteMissing@
Flatten@Lookup[Quiet[getFormatExportData /@ $ExportFormats],
"Sources"],
Automatic
];
epfuncs =
Quiet@
ToExpression[DeleteDuplicates@Join[Names["*`*"], Names["*`*`*"]],
StandardForm,
Function[Null,
If[FreeQ[DownValues[#], ExportPacket | FrontEnd`ExportPacket],
Nothing,
#
],
HoldAllComplete
]
];
epdats =
AssociationMap[
Cases[
DownValues@#,
p : (ExportPacket | FrontEnd`ExportPacket)[_, type_, opts___] :>
{
"Type" -> type,
"Opts" -> {opts}
},
\[Infinity],
Heads -> True
] &,
epfuncs
];
Cases[Flatten@Values@epdats, ("Type" -> t_) :> t] // DeleteDuplicates
|
https://mathematica.stackexchange.com/questions/165065/what-is-the-complete-list-of-exportpacket-formats
|
[
"front-end",
"undocumented"
] | 12 | 2018-02-02T15:10:38 |
[
"@ChipHurst it exists in 12.1 too",
"In 12.3 it looks like Rasterize got a performance boost through \"ImageObjectPacket\". I don't know if this format is in older versions though."
] | 2 |
Technology
| 0 |
367
|
mathematica
|
Creating compiled search TRIE file for argument string completion
|
I'd like to generate a compiled search TRIE file (like those found in `$InstallationDirectory\SystemFiles\FrontEnd\SystemResources\FunctionalFrequency\`) to help implement the auto-completion feature on user-defined functions.
As described in [File-name completion for custom functions](https://mathematica.stackexchange.com/q/56984), I could directly add a list of strings to the specialArgFunctions.tr file. However, my list of arguments is long and likely to change during development, so I'd prefer to point to a TRIE file that I can update as needed.
How can I create a compiled TRIE file? Can this be done in Mathematica or Workbench? Is the TRIE file extension structure documented anywhere?
* * *
> b3m2a1 comment:
>
> These files are here in v11:
>
>
> FileNames["*.trie",
> PacletFind["AutoCompletionData"][[1]]["Location"],
> Infinity
> ]
>
>
> Similarly in 11 is [`CA`CADumpTriePacket`](https://mathematica.stackexchange.com/a/13452/38205) which seems to extract the properties from a trie file, but not the trie format itself.
|
https://mathematica.stackexchange.com/questions/113116/creating-compiled-search-trie-file-for-argument-string-completion
|
[
"front-end",
"files-and-directories",
"customization",
"autocomplete"
] | 12 | 2016-04-20T06:58:44 |
[
"@Szabolcs, thanks for the suggestion. I will try it out and let you know if I make any progress. Thanks again!",
"You may be interested in this: mathematica.stackexchange.com/a/129910/12 It shows how to add completion without editing specialArgFunctions.tr. What I don't know how to do is enable completion of non-string arguments, as all examples that do this use .trie files and I don't know how to create such files.",
"@DanielLichtblau, Thanks for the link to 441, but I am not asking about general practices for implementing a trie data structure in Mathematica. I am looking for the specific TRIE file structure used by Mathematica's FrontEnd.",
"Duplicate of 441?"
] | 4 |
Technology
| 0 |
368
|
mathematica
|
How to modify NDSolve`StateData without crashing the kernel?
|
Probably a hard question, but it's better to cry out loud.
[Reminded by Chris K](https://chat.stackexchange.com/transcript/message/51136457#51136457), I noticed that my [`fix`](https://mathematica.stackexchange.com/a/129193/1871) function has been broken since _v11.3_. After some checking, I found that `NDSolve`StateData[…]`, which was not an atom in _v9_ but has become one now, can no longer be modified with pattern matching.
To keep `fix` up to date, I tried the solution in this [excellent post](https://mathematica.stackexchange.com/q/96225/1871) about modifying data inside atom, sadly the modified `NDSolve`StateData` crashes the kernel in the `NDSolve`ProcessSolutions` stage **even in _v9_** and I can't figure out a workaround.
The following is a simplified example attempting to modify the difference order to `2`. The last line crashes the kernel of course so please save your work before testing.
tmax = 10; lb = 0; rb = 5;
system = With[{u = u[t, x]}, {D[u, t] == D[u, x, x], u == 0 /. t -> 0,
u == Sin[t] /. x -> lb, u == 0 /. x -> rb}];
{state} = NDSolve`ProcessEquations[system, u, {t, 0, tmax}, {x, lb, rb}];
teststate =
state /. a_NDSolve`FiniteDifferenceDerivativeFunction :>
RuleCondition@
NDSolve`FiniteDifferenceDerivative[a@"DerivativeOrder", a@"Coordinates",
"DifferenceOrder" -> 2, PeriodicInterpolation -> a@"PeriodicInterpolation"];
Head[#]["DifferenceOrder"] & /@
teststate["NumericalFunction"]["FunctionExpression"][[2, 1]]
(* Should give {{2}} if the replacement succeeds. *)
(*Failed attempt: *)
ml = LinkCreate[LinkMode -> Loopback];
LinkWrite[ml, With[{e = state}, Hold[e]]];
holdstate = LinkRead[ml];
newstate = holdstate /.
a_NDSolve`FiniteDifferenceDerivativeFunction :>
RuleCondition@
NDSolve`FiniteDifferenceDerivative[a@"DerivativeOrder", a@"Coordinates",
"DifferenceOrder" -> 2, PeriodicInterpolation -> a@"PeriodicInterpolation"] //
ReleaseHold
NDSolve`Iterate[newstate, tmax]
(*Warning: the following line crashes the kernel.*)
u /. NDSolve`ProcessSolutions@newstate
Any way to avoid the crashing and fix my `fix` function?
* * *
A bit of spelunking shows this seems to be (at least partly) related to `Internal`Bag`. A minimal example:
bag = Internal`Bag[]
ml = LinkCreate[LinkMode -> Loopback];
LinkWrite[ml, With[{e = bag}, Hold[e]]]
holdbag = LinkRead[ml]
LinkClose[ml]
ReleaseHold[holdbag] === bag
(* False *)
|
https://mathematica.stackexchange.com/questions/202668/how-to-modify-ndsolvestatedata-without-crashing-the-kernel
|
[
"differential-equations",
"numerics",
"undocumented",
"compatibility"
] | 11 | 2019-07-24T07:41:44 |
[
"I think the relation to Bag is only really in the fact that both are mutable object stored at the C++ level"
] | 1 |
Technology
| 0 |
369
|
mathematica
|
Customized StepSize control in unconstrained gradient-descent optimization
|
I have a type of minimization problem I frequently solve that involves warping a large 2D triangle-mesh (~25,000 vertices) to fit a model. In the mesh, vertices carry the empirical measurements (each of which has a predicted position in the model, which is a continuous field). The potential/objective function for the system is something like this:
potentialFn[{X0_List, triangles_List}, Xideal_List, X_List] := With[
{elens0 = EdgeLengths[X0, triangles],
elens = EdgeLengths[X, triangles],
angs0 = AngleValues[X0, triangles],
angs = AngleValues[X, triangles]},
Plus[
Total[(elens0 - elens)^2],
Total[(angs0 - angs)^2],
Total@Exp[-(X - Xideal)^2],
0.25*Total[1 + (angs0/angs)^2 - 2*(angs0/angs)]]];
The idea is that the potential is equal to the sum of the squared deviations in the distances between vertex neighbors (the edges) plus the sum of the squared deviation in the angles of the mesh, plus the goal function, which is 0 when the vertices are perfectly aligned with the model and otherwise monotonically increasing with distance from an ideal fit, and finally a term that is designed to make the potential approach infinity as any triangle approaches inversion; i.e., the last term (which is similar to models of the van der Waals potential) constrains the potential such that triangle ABC would have to cross a singularity in order to becomes triangle ACB (in terms of a counterclockwise ordering of vertices). Additionally I have well-tested analytical functions that calculate the gradients (but nothing for the Hessians). I've confirmed that the gradient descent works correctly by, among other things, running a gradient descent search with a very small step-size for a very long time.
Most of those details are irrelevant to my question, however. What is important is that I can, for any set of vertex coordinates that are valid (i.e., no triangles have been inverted), calculate a maximum steps-size `S` such that for any actual step size `s` (`0 < s < S`) the resulting vertex coordinates will also be valid; so long as my step sizes always follow this rule, I can guarantee that no triangles will invert. The problem I have is that there doesn't seem to be a way for me to provide this information to the Mathematica line-search algorithm in functions like `FindMinimum`.
Initially, I thought that something like this would be the solution:
FindMinimum[
potentialFn[{X0, triangles}, X],
{X, X0},
Gradient :> potentialGradient[{X0, triangles}, X],
Method -> {
"ConjugateGradient",
"StepControl" -> {
"LineSearch",
"MaxRelativeStepSize" :> CalculateMaxRelativeStepSize[{X0, triangles}, X]}}]
This, however, gives me an error (`FindMinimum::lsmrss`, with message "-- Message Text not found -- `(CalculateMaxRelativeStepSize[{X0, triangles}, X])`") that I can only assume is due to FindMinimum's inability to interpret the delayed-rule. I've spent a lot of time looking through the Mathematica documentation on conjugate-gradient and related unconstrained search and have found nothing that indicates that I can actually control the step-size aside from setting a permanent step-size length relative to the norm of the total gradient. That is fairly useless in this kind of case, unfortunately --- I can use it, but it results in a very slow search.
My question is this: are there existing (undocumented?) methods for providing Mathematica's line-search method with a way to calculate a maximum gradient step-size?
Note: I realize I haven't provided a minimal working example of this problem. I can do so, but this would be quite an undertaking as there is a lot of context around the specific problem instances --- if anyone believes that they can help me with this kind of optimization given an example, I will work on this, but I'd appreciate some indication that the work of translating the problem into a compact instance won't be for naught before I do it.
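A hedged sketch of the clunky fallback described in the comments below, only to frame what I am after: detect an invalid step with `StepMonitor`, bail out with `Throw`, and restart with a smaller fixed `"MaxRelativeStepSize"`. Here `validMeshQ` is a hypothetical stand-in for my mesh-validity test, and `Xideal` is the model-predicted configuration used by `potentialFn` above.
attempt[maxStep_] := Catch@FindMinimum[
   potentialFn[{X0, triangles}, Xideal, X],
   {X, X0},
   Gradient :> potentialGradient[{X0, triangles}, X],
   StepMonitor :> If[! validMeshQ[X, triangles], Throw[$Failed]],  (* validMeshQ is hypothetical *)
   Method -> {"ConjugateGradient",
     "StepControl" -> {"LineSearch", "MaxRelativeStepSize" -> maxStep}}];
result = attempt[0.1];
If[result === $Failed, result = attempt[0.01]];  (* retry with a smaller cap *)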
|
https://mathematica.stackexchange.com/questions/105093/customized-stepsize-control-in-unconstrained-gradient-descent-optimization
|
[
"mathematical-optimization"
] | 11 | 2016-01-28T10:43:41 |
[
"Might an appropriate value for \"RestartThreshold\" be helpful? See Nonlinear Conjugate Gradient Methods.",
"I interpreted you last paragraph differently. Sorry. In any case, I would at least take a serious look at your problem, if I could run your code.",
"@bbgodfrey Thanks for pointing out the minor syntax error, but the note at the end of the original post was included precisely so that people would not try to copy/paste the code then post comments like that.",
"You have one too many ) in the last line of potentialFn. Also, potentialGradient and CalculateMaxRelativeStepSize are undefined. EdgeLengths and AngleValues also appear to be undefined or misused.",
"I should also note that the approach in the above comment quietly crashes the kernel as often as not somewhere in the middle of the optimization.",
"I should have noted that the best solution I've found is to use the StepMonitor argument; whenever the previous step invalidates a triangle, I use Throw/Catch to exit the minimization and restart it with a smaller \"MaxRelativeStepSize\". It seems clunky and is definitely not optimal, but it works for now."
] | 6 |
Technology
| 0 |
370
|
mathematica
|
How can autocomplete entries be added for DownValues and Properties?
|
I'd like to assign `DownValues` to a symbol like this
x["firstvalue"] = 1;
x["secondvalue"] = 2;
And then have the `DownValue` keys `"firstvalue"` and `"secondvalue"` offered in an autocomplete list once I have entered `x[`.
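(For reference, the key strings such a completion list would need can already be harvested from the definitions; this is just the expression mentioned in the comments below wrapped in a helper, not an autocomplete hook in itself.)
downValueKeys[s_Symbol] := DownValues[s][[All, 1, 1, 1]]  (* assumes every definition has the shape s["key"] = value *)
downValueKeys[x]
(* {"firstvalue", "secondvalue"} *)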
Another useful application for this kind of modification is to have the `"Properties"` of an object like a `FittedModel` added to the autocomplete list.
It is not always easy to remember `"SinglePredictionConfidenceIntervalTableEntries"`, which is one of the 64 properties of `FittedModel`.
Can this be done? What controls the entries in the _autocomplete "engine"_?
### Addendum
[This amazing answer](https://mathematica.stackexchange.com/a/16710/92) shows a way of altering the behavior of autocomplete. I do not yet understand how it works.
|
https://mathematica.stackexchange.com/questions/27629/how-can-autocomplete-entries-be-added-for-downvalues-and-properties
|
[
"front-end",
"customization",
"autocomplete"
] | 11 | 2013-06-25T11:45:19 |
[
"There is not much hope you achieve this with version 9.0.1. My answer you were referring to only works for 9.0. since back then the autocompletion was only partially done by the frontend. Nowadays, I don't see a way to interfere (let's call it improve) with the autocompletion happening in the Mathematica front end. In my latest project this would be possible to implement, but it's not the front end.",
"Well, that is far and away superior to my ad hoc solution. I may have to expropriate it ... :)",
"Ah for this case I've added the Keys function from this answer to my toolchain.",
"Not on thisFit, obviously. But, DownValues[x][[All,1,1,1]] is stymied by fcn[] := . So, I've been using Case, but you have to get the pattern correct.",
"@rcollyer, I don't understand your question. I can always call thisFit[\"Properties\"] to find the properties of the fit or DownValues[x][[All,1,1,1]] to get the keys of x. You have improvements upon these methods?",
"I can't answer the autocomplete question, but do you want a mechanism for automatically updating the properties list?"
] | 6 |
Technology
| 0 |
371
|
mathematica
|
Different timing with large array assignment
|
Consider the following example
n = 10^8;
AbsoluteTiming[
A = ConstantArray[1., n]; // anyFunc;
A[[2 ;; ;; 2]] = -1.;
]
> {0.388108, Null}
n = 10^8;
AbsoluteTiming[
B = ConstantArray[1., n];
B[[2 ;; ;; 2]] = -1.;
]
> {0.738504, Null}
A == B
> True
Why is the first approach faster? The only difference is the extra `// anyFunc` in the first one. I am using Mathematica 12 on Windows 10, 64-bit.
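A hedged check suggested by the comments below: with the session history disabled, the second variant should no longer need to copy the array in place, so the two timings should come out essentially the same.
$HistoryLength = 0;  (* keep the kernel from retaining references to previous outputs *)
n = 10^8;
AbsoluteTiming[
 B = ConstantArray[1., n];
 B[[2 ;; ;; 2]] = -1.;
 ]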
|
https://mathematica.stackexchange.com/questions/210417/different-timing-with-large-array-assignment
|
[
"list-manipulation",
"performance-tuning",
"packed-arrays"
] | 10 | 2019-11-28T02:11:43 |
[
"This $HistoryLength = 0 can be relevant in wolframscript, cause there you cannot have access to history even though is set to infinity by default. It would make sense to set it to zero if it gives performance increase.",
"@HenrikSchumacher: I have no idea, and the days of being motivated to dig way under the weighted covers to find out have faded. I found this and similar behaviors years ago, as did others, but the how/why of this particular case... beats me. Perhaps a Wolfram engineer might chime in.",
"@HenrikSchumacher I'd bet that the history tracking of the kernel keeps a reference to B but not A that causes the tensor to be copied when some elements are changed to -1. That the tracking would be turned off when one sets $HistoryLength = 0 makes sense; but why a reference to B exists but not to A -- or why a reference to B exists at all -- I cannot explain.",
"@ciao Wow, that does help, but I wonder why it does so. Can you explain that. It is not like DownValues[Out] were totally cluttered...",
"Set $HistoryLength=0"
] | 5 |
Technology
| 0 |
372
|
physics
|
Simple argument for unexpected behavior in SUSY model
|
Consider a supersymmetric theory with 3 chiral superfields, $X, \Phi_1$ and $\Phi_2,$ with canonical Kahler potential and superpotential $$ W= \frac12 h_1 X\Phi_1^2 +\frac12 h_2 \Phi_2\Phi_1^2 + fX.$$ One can show, by doing calculations, that (i) supersymmetry is spontaneously broken, but (ii) one-loop corrections do not lift the classical pseudo-moduli space.
QUESTION: is it possible to say (ii) without looking at the explicit form of Coleman-Weinberg potential, e.g. making some field redefinition which shows that this is not an interacting theory and it is very close to Polonyi model?
|
https://physics.stackexchange.com/questions/76207/simple-argument-for-unexpected-behavior-in-susy-model
|
[
"homework-and-exercises",
"supersymmetry",
"symmetry-breaking"
] | 15 | 2013-09-04T05:35:16 |
[
"@Nicolo': Comparing your calculus, with the calculus done for O’Raifeartaigh model, maybe you can check the step, or the steps, where the difference happens. Maybe this will give an idea for some possible \"rule\".",
"right, but this is not the O'Raifeartaigh model",
"This does not seem obvious .Following this paper, it we take a model similar to yours, but different, the O’Raifeartaigh model $(7.32)$, one may show that quantum corrections lift the classical pseudo-moduli space, see Chapter $7.4$ and formula $7.51$.",
"Yay, a good question after so many days! I hope you get an answer..."
] | 4 |
Science
| 0 |
373
|
physics
|
Extended Born relativity, Nambu 3-form and ternary ($n$-ary) symmetry
|
**Background:** Classical Mechanics is based on the Poincare-Cartan two-form
$$\omega_2=dx\wedge dp$$
where $p=\dot{x}$. Quantum mechanics is secretly a subtle modification of this. On the other hand, the so-called Born-reciprocal relativity is based on the "phase-space"-like metric
$$ds^2=dx^2-c^2dt^2+Adp^2-BdE^2$$
and its full space-time+phase-space extension:
$$ds^2=dX^2+dP^2=dx^\mu dx_\mu+\dfrac{1}{\lambda^2}dp^\nu dp_\nu$$
where $$P=\dot{X}$$
Note: particle-wave duality is something like $ x^\mu=\dfrac{h}{p_\mu}$.
In Born's reciprocal relativity, you have the invariance group which is the _intersection_ of $SO(4+4)$ and the ordinary symplectic group $Sp(4)$, related to the invariance under the symplectic transformations leaving the Poincaré-Cartan two-form invariant. The intersection of $SO(8)$ and $Sp(4)$ gives you, essentially, the unitary group $U(4)$, or some "cousin" closely related to the metaplectic group.
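For reference, the group-theoretic fact behind this statement is the standard "two-out-of-three" relation (with the symplectic group taken to act on the full $8$-dimensional phase space):
$$\mathrm{Sp}(2n,\Bbb R)\,\cap\, O(2n)\;\cong\; U(n),$$
which for $2n=8$ gives the $U(4)$ quoted above.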
We can try to guess an extension of Born's reciprocal relativity based on higher accelerations as an interesting academical exercise (at least it is for me). To do it, you have to find a symmetry which leaves spacetime+phasespace invariant, the force-momentum-space-time extended Born space-time+phase-space interval
$ds^2=dx^2+dp^2+df^2$
with $p=\dot{x}$, $f=\dot{p}$ in this set-up. Note that this is the simplest extension, but I am also interested in the problem of enlarging it to higher derivatives, like tug, yank, ..., and $n$-th order derivatives of position. Let me continue. This last metric looks invariant under the orthogonal group $SO(4+4+4)=SO(12)$ (you can forget about signatures at this moment).
One also needs to have an invariant triple wedge product three-form
$$\omega_3=d X\wedge dP \wedge d F$$
something that seems to be connected with a Nambu structure and where $P=\dot{X}$ and $F=\dot{P}$ and with invariance under the (ternary) 3-ary "symplectic" transformations leaving the above 3-form invariant.
**My Question(s):** I am trying to discover some (likely nontrivial) Born-reciprocal like generalized transformations for the case of "higher-order" Born-reciprocal like relativities (I am interested in that topic for more than one reason I can not tell you here). I do know what the phase-space Born-reciprocal invariance group transformations ARE (you can see them,e.g., in this nice thesis [BornRelthesis](http://eprints.utas.edu.au/10689/2/Whole.pdf)) in the case of reciprocal relativity (as I told you above). So, my question, which comes from the original author of the extended Born-phase space relativity, **Carlos Castro Perelman in** [this paper](http://vixra.org/abs/1302.0103), and references therein, is a natural question in the context of higher-order Finsler-like extensions of Special Relativity, and it eventually would include the important issue of curved (generalized) relativistic phase-space-time. After the above preliminary stuff, the issue is:
> What is the intersection of the group $SO (12)$ with the _ternary_ group which leaves invariant the triple-wedge product
>
> $$\omega_3=d X\wedge dP \wedge d F$$
More generally, I am interested in the next problem. So the extra or bonus question is: what is the ($n$-ary?) group structure leaving invariant the ($n+1$)-form
$$ \omega_{n+1}=dx\wedge dp\wedge d\dot{p}\wedge\cdots \wedge dp^{(n-1)}$$
where there we include up to ($n-1$) derivatives of momentum in the exterior product or equivalently
$$ \omega_{n+1}=dx\wedge d\dot{x}\wedge d\ddot{x}\wedge\cdots \wedge dx^{(n)}$$
contains up to the $n$-th derivative of the position. In this case, the higher-order metric would be:
$$ds^2=dX^2+dP^2+dF^2+\ldots+\left(dP^{(n-1)}\right)^2=dX^2+d\dot{X}^2+d\ddot{X}^2+\ldots+\left(dX^{(n)}\right)^2$$
This metric is invariant under $SO(4(n+1))$ symmetry (if we work in 4D spacetime), but what is the invariance group of the above ($n+1$)-form, whose intersection with the $SO(4(n+1))$ group would give us the higher-order generalization of the $U(4)$/metaplectic invariance group of Born's reciprocal relativity in phase space?
This knowledge should allow me (us) to find the analogue of the (nontrivial) Lorentz transformations which mix the
$X,\dot{X}=P,\ddot{X}=\dot{P}=F,\ldots$
coordinates in this enlarged Born relativity theory.
**Remark:** In the case we include no derivatives in the "generalized phase space" of position (or we don't include any momentum coordinate in the metric) we get the usual SR/GR metric. When n=1, we get phase space relativity. When $n=2$, we would obtain the first of a higher-order space-time-momentum-force generalized Born relativity. I am interested in that because one of my main research topics is generalized/enlarged/enhanced/extended theories of relativity. I firmly believe we have not exhausted the power of the relativity principle in every possible direction.
I do know what the transformation is in the case where one only has $X$ and $P$. I need help to find and work out the nontrivial transformations mixing $X,P$ and higher order derivatives...The higher-order extension of Lorentz-Born symmetry/transformation group of special/reciprocal relativity.
|
https://physics.stackexchange.com/questions/61522/extended-born-relativity-nambu-3-form-and-ternary-n-ary-symmetry
|
[
"classical-mechanics",
"symmetry",
"differential-geometry",
"group-theory"
] | 30 | 2013-04-18T09:55:29 |
[
"@lurscher A generalized exteded relativity theory, beyond the one pioneered by Castro, Born, Cainiello, and many others in other \"flavors\" ...And where the principles of relativity and quantum mechanics merge and get generalized, much like your work on categories and branes generalize point particles...Quite impressive and stunning! I presented my own \"roadmap\" towards \"ultimate\"(final?) relativity in Slovenia, IARD 2016...Anyway, I had not too much time to develop the ideas presented there. I hope that change in the near future.",
"i don't have a clue where you are going with all this, but it definitely looks interesting :-)"
] | 2 |
Science
| 1 |
374
|
physics
|
Is there a null incomplete spacetime which is spacelike and timelike complete?
|
Geodesic completeness, the property that the domain of every geodesic, parametrized with respect to an affine parameter, can be extended to the whole real line, is an important concept in GR, especially because the lack of it signals singularities.
This raises the question of whether incompleteness of one type of geodesic implies incompleteness of the rest. In general this is not the case.
I've found examples of the following scenarios:
* timelike complete, spacelike and null incomplete
* spacelike complete, timelike and null incomplete
* null complete, timelike and spacelike incomplete
* timelike and null complete, spacelike incomplete
* spacelike and null complete, timelike incomplete
Is it possible to construct a spacelike and timelike complete spacetime but null incomplete?
|
https://physics.stackexchange.com/questions/141643/is-there-a-null-incomplete-spacetime-which-is-spacelike-and-timelike-complete
|
[
"general-relativity",
"differential-geometry",
"spacetime",
"mathematical-physics",
"geodesics"
] | 11 | 2014-10-16T07:49:08 |
[
"@BenCrowell I agree that in general singularities are discuss in other terms. Nevertheless, Penrose and Hawking theorems only imply geodesic incompleteness.",
"@yess: I see. Sorry, my comments were not that helpful then. I'll delete them.",
"@BenCrowell Yes I meant that. Sorry for the sloppy definition.",
"@MBN Yes, in fact, I look for all the examples but couldn't find the one Geroch mention as an open problem. However the paper that John links seems to settle the mattter.",
"@JohnRennie Thank you very much the paper seems very interesting",
"I seem to remember a paper by Geroch, where this was stated as an open problem. But the paper was probably from the 70's.",
"This paper claims to describe just such a spacetime, however it's behind a paywall.",
"@BenCrowell The term \"parametrization of a geodesic\" does not make any sense without being compatible with the ambient differentiable structure. I.e. the differentiable structure is fixed and the $\\sim \\mathbb{R}$ structure is unambiguously confirmed by finding a single $\\mathbb{R}$-parametrization. It is a good definition.",
"[...] create the definition of a naked singularity, which is pretty intricate and doesn't just refer to what types of geodesics hit it.",
"Discussions I've seen of this kind of thing actually talk about the characteristics of the singularity, not the characteristics of the incomplete geodesics. The main cases of interest are spacelike singularities (e.g., a black hole's singularity) and timelike singularities (e.g., cosmological ones, or naked singularities). Googling does turn up discussion of null singularities as well. I'm not convinced that the classification in terms of the characteristics of the complete and incomplete geodesics is actually useful or consistent. If so, then it would seem to have been unnecessary to [...]"
] | 10 |
Science
| 0 |
375
|
physics
|
Equation of motion for cyclic model of the universe
|
I recently started to study the cyclic universe. I came across this [article](http://www.physics.princeton.edu/%7Esteinh/sciencecyc.pdf) [1]. My question is about the action used for describing the cyclic model: $$S = \int d^{4}x\sqrt{-g}(\frac{1}{16\pi G}R-\frac{1}{2}(\partial\phi)^{2}-V(\phi)+\beta ^{4}(\phi)(\rho _{M}+\rho _{R}))$$ where $R$ is the Ricci scalar and $g$ is the metric. I solve the Euler-Lagrange equation for this action and find the equation of motion for $\phi$: $$\partial _{\mu }\frac{\partial L}{\partial (\partial _{\mu }\phi )} - \frac{\partial L}{\partial \phi } = 0$$ $$\Rightarrow \partial _{\mu }\left [ \frac{1}{2} \sqrt{-g} g^{\alpha \beta }\left ( \delta _{\mu }^{\alpha } \partial _{\beta }\phi + \delta _{\mu }^{\beta }\partial _{\alpha}\phi \right )\right ] = \sqrt{-g}\left ( -V_{,\phi }+4\beta ^{3} \beta _{,\phi }(\rho _{M}+\rho _{R})\right )$$ The radiation term is independent of $\phi$ so only $\rho _{M}$ enters the equation of motion. The zero component: $$3 a^{2} \dot{a} \dot{\phi } + a^{3}\ddot{\phi }=a^{3}(-V_{,\phi }+4 \beta ^{3}\beta _{,\phi }\rho _{M})$$ $$\Rightarrow\ddot{\phi }+3H\dot{\phi }= -V_{,\phi }+4 \beta ^{3}\beta _{,\phi }\rho _{M}$$ where $H$ is the Hubble parameter. But this result is different from what was written in the article. The difference is the coefficient of the last term. In addition, I have a problem finding the Friedmann equations for this action (again in finding coefficients). Can anybody elaborate on the reason?
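A hedged aside, prompted by the comment thread below: if the matter density is written in terms of the rescaled scale factor $\hat a = a\,\beta(\phi)$ that appears in the fluid equation, then
$$\rho _{M}\propto \hat a^{-3}=\bigl(a\,\beta(\phi)\bigr)^{-3}\quad\Longrightarrow\quad \beta^{4}(\phi)\,\rho _{M}\propto \beta(\phi),$$
so the $\phi$-variation of the coupling term would produce $\beta^{3}\beta_{,\phi}\rho_{M}$ rather than $4\beta^{3}\beta_{,\phi}\rho_{M}$, which would account for the coefficient discrepancy (any sign difference between the papers is a separate issue also raised in the comments).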
**Reference:**
[1] P.J. Steinhardt and N. Turok, _" A Cyclic Model of the Universe,"_ Science 296 (2002), available at [here](http://www.physics.princeton.edu/%7Esteinh/sciencecyc.pdf)
|
https://physics.stackexchange.com/questions/88894/equation-of-motion-for-cyclic-model-of-the-universe
|
[
"cosmology",
"lagrangian-formalism",
"cosmological-inflation"
] | 11 | 2013-12-03T12:52:45 |
[
"Seems strange. You have $2$ papers with same authors (J. Steinhardt and Neil Turok) and a different sign... Sounds like a typo for the equation ($1$) of the first paper.",
"I think that $\\triangle\\phi$ is zero and $\\phi$ only depends on time @Trimok.",
"It is not completely clear for me what $(\\partial \\phi)^2$ is exactly. If it is $(\\partial \\phi)^2 = -(\\partial_0 \\phi)^2+(\\vec \\partial \\phi)^2$, then the Euler-Lagrange equations give a $\\square \\phi = \\ddot \\phi - \\triangle\\phi$, and not a $\\ddot \\phi$. So, I think you need to reexpress, if possible, $\\triangle\\phi$ as a function of $\\beta, \\beta_{,\\phi}$, etc...",
"Thanks very much, @Trimok. That explains the factor 4, but what about the sign? I read this article. In this paper the sign of the coupling term is negative. Which one is correct?",
"For the problem of the factor $4$, from the equation $(5)$ (fluid equation of motion), you see that there is a dependence $\\rho_M \\sim \\hat a^{-3}\\sim (a\\beta(\\phi))^{-3}$, so the term $\\beta^4(\\phi)\\rho_m$ is (secretely...) proportionnal to $\\beta(\\phi)$"
] | 5 |
Science
| 0 |
376
|
physics
|
Radiative equilibrium in orbit of a black hole
|
According to [Life under a black sun](http://arxiv.org/pdf/1601.02897v1.pdf), Miller's planet from _Interstellar_ , with a time dilation factor of 60,000, should be heated to around 890C by blue-shifted cosmic background radiation.
How they arrive at that number, however, seems to me a little opaque.
As the article describes, there are two major effects to consider: gravitational blueshifting, and blue- and redshifts due to the planet's orbital motion.
Calculating the purely gravitational effects seems straightforward (although I admit I may still be missing something); given that radiative power is proportional to $T^4$, and power should scale linearly with the time dilation factor, the apparent CMB temperature should be $2.7K * 60,000^{1/4} = 42.26K$. Considering that a cold black hole occupies part of the sky, the equilibrium temperature of the planet should be slightly lower. That's clearly a long way from 890C!
It appears, then, that the majority of the heating must be a result of the circular motion of the planet in orbit. Now, it seems fairly obvious that getting precise answers will require numerical simulation, but it should be possible to at least get a close order-of-magnitude estimate based on a model of a planet moving at constant velocity through a background of the temperature calculated from gravitational effects alone. Unfortunately, though, the article doesn't quote speeds, and I haven't been able to figure out how to calculate the relevant velocities for a planet in a low orbit around a rotating black hole.
So, can anybody help me fill in the blanks? If I start with a black hole of a given mass and angular momentum, and a planet in a stable circular orbit at some given radius, how do I get to an estimate of equilibrium temperature?
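For the final step, the equilibrium condition itself is just the standard radiative balance for a zero-albedo blackbody sphere (with $R_p$ the planet's radius and $P_{\rm abs}$ the total absorbed power, both of which are my notation rather than the paper's):
$$4\pi R_p^2\,\sigma T_{\rm eq}^4 = P_{\rm abs}\quad\Longrightarrow\quad T_{\rm eq}=\left(\frac{P_{\rm abs}}{4\pi R_p^2\,\sigma}\right)^{1/4},$$
so the hard part is evaluating $P_{\rm abs}$ over the Doppler- and gravitationally shifted sky seen by the orbiting planet.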
|
https://physics.stackexchange.com/questions/246203/radiative-equilibrium-in-orbit-of-a-black-hole
|
[
"general-relativity",
"thermodynamics",
"time-dilation",
"cosmic-microwave-background"
] | 12 | 2016-03-29T14:04:05 |
[
"Funny how things come around. I have been struggling with the same thing. I follow where the 890C comes from, given the illumination conditions that are deduced, but like you, struggle to immediately see where the blueshift factor of 275,000 came from. @Yuketerz to the rescue. physics.stackexchange.com/a/710922/43351",
"I was just about to embark on an answer but then noticed that the paper you reference is all about how to get to that answer. So you need to read it and then come back with what you don't understand. e.g. The calculation you have done is (a) incorrect and (b) not directly relevant to calculating the equilibrium temperature of the planet.",
"See Section 3 of this paper for calculating the velocity of circular orbits in Kerr spacetime."
] | 3 |
Science
| 0 |
377
|
physics
|
Topological quantum error correcting codes which are not CSS codes
|
The most promising-seeming quantum error correction codes for the medium-to-long term are the topological codes, of which the toric code (and variants such as planar surface codes) and colour codes are the main examples.
As with essentially all approaches to quantum error correction, these are stabiliser codes. It also happens that both the toric code and colour codes are CSS codes: that is, they have a set of stabiliser generators which consist of products of $X$ operators or products of $Z$ operators, on subsets of the qubits. The useful properties of these codes for quantum error correction do not seem to particularly rely on this fact (the much more obviously relevant property is that the code distance of these codes scale with the size of the system and can be described by topological properties). So it may simply be a coincidence that these are CSS codes -- but at the same time, it is intriguing that the codes with the best known thresholds seem to be CSS codes.
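(For concreteness, the CSS property being referred to can be seen in the usual toric-code generators, one for each vertex $v$ and each plaquette $p$ of the lattice,
$$A_v=\prod_{e\ni v}X_e,\qquad B_p=\prod_{e\in\partial p}Z_e,$$
each of which is purely a product of $X$'s or purely a product of $Z$'s.)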
**Question.** Are there known examples of topological stabiliser codes which are neither CSS codes, nor equivalent to a CSS code up to local unitaries? Or is there a reason why such codes could not exist?
|
https://physics.stackexchange.com/questions/380501/topological-quantum-error-correcting-codes-which-are-not-css-codes
|
[
"quantum-information",
"quantum-error-correction"
] | 11 | 2018-01-17T02:44:48 |
[
"A google search brought this up: arxiv.org/abs/1012.0859 (\"Local non-CSS quantum error correcting code on a three-dimensional lattice\", Isaac H. Kim)"
] | 1 |
Science
| 0 |
378
|
rpg
|
What is this RPGA world/campaign design contest winner (ca. 2006?) I remember?
|
More than a decade ago, my best friend had an online subscription to various D&D materials. When he was back home visiting, he let me spend a few hours exploring this treasure trove. From what I can remember, there were fully indexed, digital versions of the current rule books. There were Q&A and discussion fora as well as online campaign settings somewhat like much older/slower play-by-mail. One of the campaign worlds I was particularly impressed with there was essentially a water world, where adventures took place on the high seas between islands and underwater in the deep.
What struck me the most, though, was the world design contest winner from an RPGA network contest that I had found in the campaign settings. As I recall, it was set on a world where the great heroes had just failed to prevent the invasion of some extra-planar entities that meant to take total control. Completely blown away by the unusual concept of the starting point, I had noted the name of this world and campaign setting but lost it over the years. My friend could not remember it and no longer had his subscription. I would very much like to relearn the name of this world and get more details about it as a campaign setting. Any assistance is much appreciated.
|
https://rpg.stackexchange.com/questions/181613/what-is-this-rpga-world-campaign-design-contest-winner-ca-2006-i-remember
|
[
"dungeons-and-dragons",
"product-identification"
] | 16 | 2021-03-06T05:10:05 |
[
"The subscription service could be DnD Insider? That however only existed from about 2009 onwards rpg.stackexchange.com/questions/193504/…",
"@KorvinStarmast It wasn’t, sadly, which is why I didn’t put an answer on this.",
"@KRyan Hoping your research is productive",
"Looking into this more, it seems that WotC went out of their way to not discuss the non-winning entries to that contest. Of the three finalists, only Eberron is known since the rules of the contest included the rights to all finalist submissions becoming property of Wizards of the Coast and the authors signing NDAs about them. We know that Rich Burlew (of Order of the Stick fame) was one and Nathan Toomey was the other, but nothing about what they wrote. Of the semi-finalists, 5/8 have been published elsewhere, and the other 3/8 are unknown. Investigating those 5.",
"Notably, the winner of the 2002 RPGA contest was Eberron, which is still a major campaign setting published by Wizards of the Coast. Definitely doesn’t match this description, but so far as I know, that was the only contest of the sort that Wizards held—certainly the only one that resulted in publication of the setting that won. Is it possible that this was runner-up or honorable mention or something from that contest?",
"Thanks for the advice, Jason_c_o. While I realised the superfluous details could be distracting, they also provided a context I thought might help. No, the water world campaign is not the one about which I was asking (though I wouldn't mind learning its name and more about it, too). It was the WoC subscription service with all the manuals, rulebooks, campaign settings and adventures that mentioned the invaded world setting. I recall my friend's visit as being in late summer 2006, the time at which I saw the invaded world campaign: it had already won an RPGA world design contest.",
"Just to be make sure: The water-world you mention is not the world design contest winner you're looking for, correct? If so, you may want to cut some of the tangential information, it's distracting. Also, can you remember anything else? What year the contest was held (The RPGA Net ran many design contests, and took submissions for tournaments), or what book/magazine mentioned it? Is 2006 when you saw it, or when it was made?",
"Welcome to RPG.SE! Take the tour if you haven't already and see the help center or ask us here in the comments (use @ to ping someone) if you need more guidance. Good Luck and Happy Gaming!"
] | 8 |
Culture & Recreation
| 0 |
379
|
stackoverflow
|
Compatibility of impredicative Set and function extensionality
|
The Coq [FAQ](https://coq.inria.fr/faq#htoc41) says that function extensionality is consistent with predicative `Set`. It's not fully clear to me from this whether it's consistent with impredicative `Set` (or maybe the consistency is unknown in that case).
|
https://stackoverflow.com/questions/40395946/compatibility-of-impredicative-set-and-function-extensionality
|
[
"rocq-prover"
] | 31 | 2016-11-03T00:27:09 |
[
"This question has been mentioned by Zimm i48 in this related Coq-Club thread (this post specifically).",
"If you're considering migration then cstheory.SE would be another good choice, imo.",
"Additionally, this question is about the theory of Coq rather than a programming issue with Coq and should probably be migrated to MathOverflow where it has more chances of being answered in details.",
"This sounds a bit misleading; I imagine that it is consistent with impredicative Set as well. But that's probably a better question for the Coq mailing list."
] | 4 |
Technology
| 0 |
380
|
stackoverflow
|
How can (<*) be implemented optimally for sequences?
|
The `Applicative` instance for `Data.Sequence` generally performs very well. Almost all the methods are _incrementally asymptotically optimal_ in time and space. That is, given fully forced/realized inputs, it's possible to access any _portion_ of the result in asymptotically optimal time and memory residency. There is one remaining exception: `(<*)`. I only know two ways to implement it as yet:
1. The default implementation
xs <* ys = liftA2 const xs ys
This implementation takes `O(|xs| * |ys|)` time and space to fully realize the result, but only `O(log(min(k, |xs|*|ys|-k)))` to access just the `k`th element of the result.
2. A "monadic" implementation
xs <* ys = xs >>= replicate (length ys)
This takes only `O(|xs| * log |ys|)` time and space, but it's not incremental; accessing an arbitrary element of the result requires `O(|xs| * log |ys|)` time and space.
I have long believed that it should be possible to have our cake and eat it too, but I've never been able to juggle the pieces in my mind well enough to get there. To do so appears to require a combination of ideas (but not actual code) from the implementations of `liftA2` and `replicate`. How can this be done?
* * *
Note: it surely won't be necessary to incorporate anything like the `rigidify` mechanism of `liftA2`. The `replicate`-like pieces should surely produce only the sorts of "rigid" structures we use `rigidify` to get from user-supplied trees.
### Update (April 6, 2020)
Mission accomplished! I managed to find [a way to do it](https://github.com/haskell/containers/pull/713). Unfortunately, it's a little too complicated for me to understand everything going on, and the code is ... rather opaque. I will upvote and accept a good _explanation_ of what I've written, and will also happily accept suggestions for clarity improvements and comments on GitHub.
### Update 2
Many thanks to [Li-Yao Xia](https://stackoverflow.com/users/6863749/li-yao-xia) and Bertram Felgenhauer for helping to clean up and document my draft code. It's now considerably less difficult to understand, and will appear in the next version of `containers`. It would still be nice to get an answer to close out this question.
|
https://stackoverflow.com/questions/60621991/how-can-be-implemented-optimally-for-sequences
|
[
"haskell",
"data-structures",
"applicative",
"finger-tree"
] | 22 | 2020-03-10T09:30:55 |
[
"@Li-yaoXia, something very similar goes on in the aptyMiddle function. The tricky bit here is being sure that the function constructed (when applying the size to your hypothetical version) creates a result with the required internal sharing.",
"@dfeuer I think I've reduced the problem to constructing a function whose type would look like this with dependent types: repeat :: Int -> a -> (exists (n :: Nat). Node^n a) (this existential is equivalent to what you get from throwing away the Digits fields in the definition of FingerTree), which I find awfully non-obvious to implement.",
"Aha! I think I've found a way to manage the complexity at a (very small) efficiency cost! The trick will be a stripped-down version of the type of rigid trees that, instead of having digits, just has numbers indicating how big those digits are. This is all the structural information we need, because replication can (I'm pretty sure) still work in redundant base 3! So we just need a function (or class method) to flesh out those rigid tree bits, and a slightly modified aptyMiddle to put it all together.",
"@moonGoose, by way of background, I'm the one who wrote the liftA2 implementation, and there are still parts of it that I don't understand myself. replicate was one of several extremely clever functions Louis Wasserman wrote; it's simpler than his tails but still pretty tricky.",
"@moonGoose, I hear you about trying to give more details, but I don't think it will help. This is most definitely not going to be a quick-and-easy question for anybody. Answering it will require a deep dive into Data.Sequence internals; explaining the performance analysis doesn't seem likely to reduce the amount of work or effort involved.",
"@moonGoose, yes, it is.",
"Is join $ replicate (length ys) <$> xs the same as 2?",
"Imo it would help everyone to explain a bit how you get those bounds, perhaps even a diagram of what the different implementations produce? Otherwise everyone who would like to think about the problem has to independently do the work to get to where you're starting from."
] | 8 |
Technology
| 0 |
381
|
stackoverflow
|
Metal crash upon adding SKSpriteNode to SKEffectNode
|
> -[MTLDebugRenderCommandEncoder setScissorRect:]:2028: failed assertion (rect.x(0) + rect.width(1080))(1080) must be <= 240
I am getting this crash when adding a simple `SKSpriteNode` to a `SKEffectNode` with the following code
SKSpriteNode *warpSprite = [SKSpriteNode spriteNodeWithImageNamed:@"art.scnassets/symbol.png"];
SKEffectNode *entryEffectsNode = [[SKEffectNode alloc] init];
[entryEffectsNode addChild:warpSprite];
[self addChild:entryEffectsNode];
I have not touched these nodes anywhere else in my project. When I change the sprite, the value in (must be <= value) changes within the error.
Edit: I have replaced the sprite image with a simple `spriteNodeWithColor:Size:` and the (<= value) is always twice the size of the sprite. Also, it should be noted that the SKScene is being used as an overlay in a SceneKit scene.
I have created a seperate SKScene with the following code, which still results in the same error.
@implementation testScene
-(id)initWithSize:(CGSize)size {
if (self = [super initWithSize:size]) {
SKSpriteNode *testSprite = [SKSpriteNode spriteNodeWithColor:[SKColor purpleColor] size:CGSizeMake(100, 100)];
SKEffectNode *testEffect = [[SKEffectNode alloc] init];
[testEffect addChild:testSprite];
[self addChild:testEffect];
}
return self;
}
@end
Edit 2: I have just tested the above scene as an overlay on a default SceneKit Project and it crashes with the same error.
Edit 3: I have reproduced this using Swift. Bug report submitted to Apple.
|
https://stackoverflow.com/questions/42383395/metal-crash-upon-adding-skspritenode-to-skeffectnode
|
[
"ios",
"objective-c",
"sprite-kit",
"swift3",
"skeffectnode"
] | 21 | 2017-02-21T21:29:20 |
[
"What is the current status on your bug report?",
"Are you certain your initWithSize method is called on the main thread ? When using a SKScene as a SCNScene overlay, all SKNode operations must be run from the main thread.",
"Might be a bug. File one with Apple to be sure.",
"With this changed i still get the same error. What else would you need to know? I have just run the same code in a new project and it works as you would expect. I have replaced the sprite image with a simple spriteNodeWithColor:Size: and the (<=value) is always twice size of the sprite.",
"your rect is greater than what is supported, [SKSpriteNode spriteNodeWithImageNamed:@\"art.scnassets/symbol.png\"] should be [SKSpriteNode spriteNodeWithImageNamed:@\"symbol\"] so that it can properly handle retina graphics, Other than this, we would need to know more"
] | 5 |
Technology
| 0 |
382
|
stackoverflow
|
Creating Spark Objects from JPEG and using spark_apply() on a non-translated function
|
Typically, when one wants to use `sparklyr` with a custom function (_i.e._ a [non-translated function](https://sparkfromr.com/non-translated-functions-with-spark-apply.html)), they place it within `spark_apply()`. However, I've only encountered examples where a _single_ local data frame is either `copy_to()` or `spark_read_csv()` into a remote data source and then `spark_apply()` is used on it. An example, for illustrative purposes only:
library(sparklyr)
sc <- spark_connect(master = "local")
n_sim = 100
iris_samps <- iris %>% dplyr::filter(Species == "virginica") %>%
sapply(rep.int, times=n_sim) %>% cbind(replicate = rep(1:n_sim, each = 50)) %>%
data.frame() %>%
dplyr::group_by(replicate) %>%
dplyr::sample_n(50, replace = TRUE)
iris_samps_tbl <- copy_to(sc, iris_samps)
iris_samps_tbl %>%
spark_apply(function(x) {mean(x$Petal_Length)},
group_by = "replicate") %>%
ggplot(aes(x = result)) + geom_histogram(bins = 20) + ggtitle("Histogram of 100 Bootstrapped Means using sparklyr")
It seems like it would be therefore possible to use this on any range of non-translated functions coming from `CRAN` or `Bioconductor` packages as long as the data resides in a Spark Object.
I've come up with a specific problem for `.jpeg` images, as I read that [`SparkR` can load compressed images (`.jpeg`, `.png`, _etc._) into a raw image representation via the Java `ImageIO` library](https://spark.apache.org/docs/latest/ml-datasource) \- it seems possible that `sparklyr` could do this as well.
`RsimMosaic::composeMosaicFromImageRandom(inputImage, outputImage, pathToTilesLibrary)` function takes an input image and a path to tiles used to create a photo mosaic and outputs an image ([example here](https://rviews.rstudio.com/2020/02/13/photo-mosaics-in-r/)).
If this function only took one image and I knew how to turn it into a Spark object, I might imagine the command would look like: `composeMosaicFromImageRandom(inputImage, outputImage, spark_obj)`. However, this function takes a path to 30,000 images.
How would one create 30,000 Spark Objects from the path to these tiles (`.jpegs`) and then use this function?
If the underlying code would actually need to be modified I've used `jimhester/lookup` to provide the source code:
function (originalImageFileName, outputImageFileName, imagesToUseInMosaic,
useGradients = FALSE, removeTiles = TRUE, fracLibSizeThreshold = 0.7,
repFracSize = 0.25, verbose = TRUE)
{
if (verbose) {
cat(paste("\n ------------------------------------------------ \n"))
cat(paste(" R Simple Mosaic composer - random version \n"))
cat(paste(" ------------------------------------------------ \n\n"))
}
if (verbose) {
cat(paste(" Creating the library... \n"))
}
libForMosaicFull <- createLibraryIndexDataFrame(imagesToUseInMosaic,
saveLibraryIndex = F, useGradients = useGradients)
libForMosaic <- libForMosaicFull
filenameArray <- list.files(imagesToUseInMosaic, full.names = TRUE)
originalImage <- jpeg::readJPEG(filenameArray[1])
xTileSize <- dim(originalImage[, , 1])[1]
yTileSize <- dim(originalImage[, , 1])[2]
if (verbose) {
cat(paste(" -- Tiles in the Library : ", length(libForMosaic[,
1]), "\n"))
cat(paste(" -- Tile dimensions : ", xTileSize, " x ",
yTileSize, "\n"))
}
if (verbose) {
cat(paste("\n"))
cat(paste(" Reading the original image... \n"))
}
originalImage <- jpeg::readJPEG(originalImageFileName)
xOrigImgSize <- dim(originalImage[, , 1])[1]
yOrigImgSize <- dim(originalImage[, , 1])[2]
if (verbose) {
cat(paste(" -- Original image dimensions : ", xOrigImgSize,
" x ", yOrigImgSize, "\n"))
cat(paste(" -- Output image dimensions : ", ((xOrigImgSize -
2) * xTileSize), " x ", ((yOrigImgSize - 2) * yTileSize),
"\n"))
}
if (verbose) {
cat(paste("\n"))
cat(paste(" Computing the mosaic... \n"))
}
outputImage <- array(dim = c(((xOrigImgSize - 2) * xTileSize),
((yOrigImgSize - 2) * yTileSize), 3))
removedList <- c()
l <- 1
pCoord <- matrix(nrow = ((xOrigImgSize - 2) * (yOrigImgSize -
2)), ncol = 2)
for (i in 2:(xOrigImgSize - 1)) {
for (j in 2:(yOrigImgSize - 1)) {
pCoord[l, 1] <- i
pCoord[l, 2] <- j
l <- l + 1
}
}
npixels <- length(pCoord[, 1])
for (i in 1:npixels) {
idx <- round(runif(1, 1, length(pCoord[, 1])))
pixelRGBandNeigArray <- computeStatisticalQuantitiesPixel(pCoord[idx,
1], pCoord[idx, 2], originalImage, useGradients)
tileFilename <- getCloseMatch(pixelRGBandNeigArray,
libForMosaic)
startI <- (pCoord[idx, 1] - 2) * xTileSize + 1
startJ <- (pCoord[idx, 2] - 2) * yTileSize + 1
outputImage[startI:(startI + xTileSize - 1), startJ:(startJ +
yTileSize - 1), ] <- jpeg::readJPEG(tileFilename)
if (removeTiles) {
libForMosaic <- removeTile(tileFilename, libForMosaic)
removedList <- c(removedList, tileFilename)
if (length(libForMosaic[, 1]) < (fracLibSizeThreshold *
length(libForMosaicFull[, 1]))) {
idxs <- runif(round(0.25 * length(libForMosaicFull[,
1])), 1, length(removedList))
for (ii in 1:length(idxs)) {
libForMosaic <- addBackTile(removedList[idxs[ii]],
libForMosaic, libForMosaicFull)
}
removedList <- removedList[-idxs]
}
}
if (length(pCoord[, 1]) > 2) {
pCoord <- pCoord[-idx, ]
}
}
if (verbose) {
cat(paste("\n"))
cat(paste(" Done!\n\n"))
}
jpeg::writeJPEG(outputImage, outputImageFileName)
}
_Please Note:_ My first attempt to speed up this code was 1) using `profvis` to find the bottlenecks (_i.e._ the for-loops) and 2) using the `foreach` package on the for-loops. This resulted in slower code, which suggested I was parallelising at too low a level. As I understand it, `sparklyr` is more about distributing the computation than about parallelizing it, so perhaps this could work.
|
https://stackoverflow.com/questions/60743764/creating-spark-objects-from-jpeg-and-using-spark-apply-on-a-non-translated-fun
|
[
"r",
"binary",
"jpeg",
"sparklyr"
] | 18 | 2020-03-18T09:33:27 |
[
"Have you successfully used spark_apply() on a single .jpeg? The help says that you need to provide >An object (usually a spark_tbl) coercable to a Spark DataFrame.",
"I would like to use sparklyr on non-translated functions, specifically for composeMosaicFromImageRandom().",
"A long post; what exactly are you trying to achieve?"
] | 3 |
Technology
| 0 |
383
|
stackoverflow
|
Detect ring/silent switch position change
|
I'm working on an app for which I would like to:
1. respect the ring/silent switch when playing audio, and
2. display an icon indicating that sound is muted when the ring/silent switch is set to silent.
Requirement 1 is easy: I'm using [`AVAudioSessionSoloAmbient`](https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioSession_ClassReference/Reference/Reference.html#//apple_ref/doc/constant_group/Audio_Session_Categories) as my app's audio session category, so that my audio session will mute itself when the ring/silent switch is off.
Requirement 2 seems considerably harder, because I need some sort of callback, notification, or KVO that will allow me to monitor the position of the switch, but Apple has made it clear that it is unwilling to offer an officially exposed way of doing this. That said, if I can find a nonintrusive way to monitor the switch's position, even one that is technically prohibited (like, say, an internal `NSNotification`), I would be willing to run it by Apple.
Further, I would prefer not to implement some of the polling solutions I've found elsewhere. See the Related Questions section for an example.
* * *
### What I've Learned (aka What Doesn't Work)
* In iOS versions 4 and 5, at least, there was a trick that could be used to get the switch's position by [watching the route property of the current audio session](https://stackoverflow.com/questions/6901363/detecting-the-iphones-ring-silent-mute-switch-using-avaudioplayer-not-worki). Besides the fact that this approach is deprecated by the `AVAudioSession` class, I can confirm that the trick is no longer an option: the current route, as reported both by the C functions comprising the deprecated `Audio Session` API and by the current `AVAudioSession` class, does not change when the ring/silent switch is toggled.
* [`AVSystemController` is an internal class](https://github.com/nst/iOS-Runtime-Headers/blob/7ef0330f961248b9021e59e12aa3182440194817/PrivateFrameworks/Celestial.framework/AVSystemController.h) that seems to have a lot of promise. Invoking `- (BOOL)toggleActiveCategoryMuted` on the `sharedAVSystemController` does indeed mute my app's audio. Further, the shared singleton posts an `AVSystemController_SystemVolumeDidChangeNotification` notification when the system volume is changed via the volume buttons. Unfortunately, this notification is not posted in response to changes to the ring/silent switch (though [this dubiously attributed source says it should](http://cocoadev.com/AVSystemController)).
* As far as I can tell, there are _no_ `NSNotification`s posted by _any_ object in response to ring/silent switch position changes. I came to this conclusion after adding myself as an observer to all notifications in the default center:
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(handleNotification:) name:nil object:nil];
and then toggling the ring/silent switch. Nothing.
* The `AVSystemController` class has a promising method with the signature:
- (BOOL)getActiveCategoryMuted:(BOOL*)arg1;
However, there are two problems with this:
1. Neither the return value nor the `BOOL` pointed to by `arg1` ever seem to change in response to toggling the ring/silent switch.
2. Because of the method signature, this method is not (so far as I understand) a candidate for KVO.
* I suspect that some object sends some other object(s) a [`GSEventRef`](https://github.com/kennytm/iphone-private-frameworks/blob/master/GraphicsServices/GSEvent.h) when the mute switch is changed, because I see the following in the declaration for event types:
kGSEventRingerOff = 1012,
kGSEventRingerOn = 1013,
However, I'm pretty sure I can't intercept those messages, and even if I could, that would be a bit more than "a little" intrusive.
* * *
### Why I Believe This Is Possible
Put simply: the Instagram app exhibits essentially this behavior. When you watch a video, it respects the ring/silent switch's setting, but displays an icon when the switch is off. The icon disappears and reappears so immediately after moving the switch that I assume it must be event-based, not polling.
* * *
### Related Questions
* [This question dates back to iOS 4, and uses the methods that I mentioned in my first bullet above.](https://stackoverflow.com/questions/6901363/detecting-the-iphones-ring-silent-mute-switch-using-avaudioplayer-not-worki)
* [This question is very similar to the one above.](https://stackoverflow.com/questions/7798891/detect-silent-mode-in-ios5?lq=1)
* [This question is (much) more current, asking about iOS 7.](https://stackoverflow.com/questions/20992452/detect-silent-mode-in-ios-7) However, because I'm willing to accept a minimally intrusive breaking of the private-API rules, I would contend that this is a different question from my own.
* [This _answer_ suggests using a polling method that I would strongly prefer to avoid.](https://stackoverflow.com/a/19013671/1292061) (The approach is sketched below for reference.)
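For completeness, the polling approaches I've seen generally boil down to something like the following sketch (which is exactly the sort of thing I'd like to avoid). The bundled `silence.caf` file and the 0.1 s threshold are my own assumptions, not anything Apple documents:
    #import <AudioToolbox/AudioToolbox.h>
    static SystemSoundID silentSound;     // hypothetical ~0.5 s silent sound in the bundle
    static CFAbsoluteTime playStart;
    static void SilentSoundFinished(SystemSoundID soundID, void *clientData)
    {
        CFAbsoluteTime elapsed = CFAbsoluteTimeGetCurrent() - playStart;
        // If the ~0.5 s file "finished" almost instantly, the ringer switch is set to silent.
        BOOL muted = (elapsed < 0.1);
        NSLog(@"ring/silent switch muted: %d", muted);
    }
    - (void)pollMuteSwitch
    {
        if (silentSound == 0) {
            NSURL *url = [[NSBundle mainBundle] URLForResource:@"silence"
                                                 withExtension:@"caf"];
            AudioServicesCreateSystemSoundID((__bridge CFURLRef)url, &silentSound);
            AudioServicesAddSystemSoundCompletion(silentSound, NULL, NULL,
                                                  SilentSoundFinished, NULL);
        }
        playStart = CFAbsoluteTimeGetCurrent();
        AudioServicesPlaySystemSound(silentSound);   // system sounds respect the switch
    }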
|
https://stackoverflow.com/questions/24145386/detect-ring-silent-switch-position-change
|
[
"ios",
"iphone",
"audio",
"avaudioplayer",
"ios7.1"
] | 17 | 2014-06-10T08:47:51 |
[
"The link above looks dead. The code above is hosted at: github.com/moshegottlieb/SoundSwitch (via stackoverflow.com/questions/20992452/…)",
"Instagram reportedly uses this solution, which is not pretty: sharkfood.com/content/Developers/content/Sound%20Switch",
"@PeterRobert No, I haven't. I ended up carefully implementing a solution by polling. It wasn't pretty.",
"Have you found any event based solution?"
] | 4 |
Technology
| 0 |
384
|
stackoverflow
|
Azure App Service load balancing settings
|
The ARM template for Azure App Service has a setting to configure the load-balancing algorithm: `loadBalancing`. According to the [documentation](https://learn.microsoft.com/en-us/azure/templates/microsoft.web/sites#siteconfig-object) it's available through the SiteConfig object and can have the following values: WeightedRoundRobin, LeastRequests, LeastResponseTime, WeightedTotalTraffic, RequestHash.
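For reference, a minimal fragment showing where the setting sits in the template (the resource name and apiVersion here are just placeholders; only the `siteConfig.loadBalancing` property is the part in question):
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2018-02-01",
      "name": "my-web-app",
      "properties": {
        "siteConfig": {
          "loadBalancing": "LeastResponseTime"
        }
      }
    }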
We performed some testing with a Standard S1 App Service plan and two instances. The first instance responded to all requests with no delay, the second instance responded to all requests with a 3-second delay, and ARR affinity was turned off.
The tests showed that all settings perform the same: after some ramp-up time, requests spread evenly between the two instances. This was not expected, at least for LeastResponseTime, which intuitively should direct more traffic to the first instance (the one with the low response time).
So the question is: does this setting even work? And if it does, in which App Service configuration is it respected?
|
https://stackoverflow.com/questions/52492410/azure-app-service-load-balancing-settings
|
[
"azure",
"azure-web-app-service",
"azure-load-balancer"
] | 12 | 2018-09-25T00:03:55 |
[
"The App Service Plan level might also play a role. In some cases, certain features or their behaviors might differ based on whether you're on a Standard, Premium, or Isolated plan. Although documentation doesn't typically specify this, service level might impact the sophistication of load balancing features.",
"The specific implementation of these algorithms by Azure might not exactly match their traditional definitions. For example, LeastResponseTime might require a significant difference in response times or a larger sample size to alter the routing decisions noticeably.",
"You could take a look at this case. The scale-out itself is the load balance.",
"Just a thought, have you tried to verify it from a different region. This latency tool may be helpful azurespeed.com",
"Azure App Service uses internal load balancer that is not exposed and not configurable directly (except misterios loadBalancing setting in ARM). So there is no way to change the weights.",
"For Weighted round robin, you can set the weight 5 for instance1, and weight 1 for instance2, what is the result? and you can refer to these load balancing algorithm description.",
"Azure app service has setting inside ARM template called \"loadBalancing\". Setting name and supported values suggest that it should somehow specify how requests are distributed between instances. However we were not able to see any difference using different values for \"loadBalancing\" setting for our test case. 50% of request was server by first instance, 50% of requests were server by second instance, for all supported setting value values: WeightedRoundRobin, LeastRequests, LeastResponseTime, WeightedTotalTraffic, RequestHash.",
"We used Netling to generate lot of HTTP GET requests to app service. App service was running simple ASP.NET MVC application that respond HTTP 200 to all get requests with \"OK\" in response body. App was configured in such a way that first instance was responding immediately (response time below 100 ms) and second instance was adding 3 second delay. We tried to simulate case when one of the servers experiencing high load with this test.",
"Not sure your question, could you describe what you have done in your test? or what is load balancing setting? and what do you expect in the result?"
] | 9 |
Technology
| 0 |
385
|
stackoverflow
|
When malloc_trim is (was) called automatically from free in glibc's ptmalloc or dlmalloc?
|
There is an [incorrectly documented](https://stackoverflow.com/questions/28612438/can-malloc-trim-release-memory-from-the-middle-of-the-heap/42273711) function `malloc_trim` in glibc malloc (ptmalloc2), added in 1995 by Doug Lea and Wolfram Gloger (dlmalloc 2.5.4).
In glibc this function can return some memory freed by the application back to the operating system, using a negative sbrk for heap trimming and `madvise(...MADV_DONTNEED)` for unused pages in the middle of the heaps (this [feature is in `malloc_trim` since 2007 - glibc 2.9](https://sourceware.org/git/?p=glibc.git;a=commit;f=malloc/malloc.c;h=68631c8eb92ff38d9da1ae34f6aa048539b199cc), but not in `systrim`), and probably not by trimming additional thread arenas (which are not sbrk-allocated but mmapped as separate heaps).
Such a function can be very useful for some long-running C++ daemons that run many threads with a very high number of mixed-size allocations - tiny, small, medium and large. These daemons may have an almost constant amount of live (allocated by malloc and not yet freed) memory, but at the same time grow in RSS (physical memory consumption) over time. However, not every daemon can call this function, and not every author of such daemons knows that it should be called periodically.
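(For illustration, a minimal sketch of what "call it periodically" means - the interval and the pad value of 0 are arbitrary choices of mine:)
    #include <malloc.h>
    #include <pthread.h>
    #include <unistd.h>
    /* Periodically ask glibc to give freed memory back to the kernel. */
    static void *trim_thread(void *arg)
    {
        (void)arg;
        for (;;) {
            sleep(60);          /* arbitrary interval */
            malloc_trim(0);     /* pad = 0: trim as much as possible */
        }
        return NULL;
    }
    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, trim_thread, NULL);
        /* ... the daemon's real work ... */
        pthread_join(t, NULL);
        return 0;
    }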
Man page of `malloc_trim` by Kerrisk <http://man7.org/linux/man-pages/man3/malloc_trim.3.html> (2012) says in Notes about automatic calls to `malloc_trim` from other parts of glibc malloc:
> This function is automatically called by free(3) in certain circumstances.
(Also in <http://www.linuxjournal.com/node/6390/print> Advanced Memory Allocation, 2003 - _Automatic trimming is done inside the free() function by calling`memory_trim()`_)
This note is probably from source code of malloc/malloc.c which mentions in several places that `malloc_trim` is called automatically, probably from the `free()`:
736 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
737 to keep before releasing **via malloc_trim in free().**
809 * When malloc_trim is called automatically from free(),
810 it is used as the `pad' argument.
2722 systrim ... is also called by the public malloc_trim routine.
But according to a grep for `malloc_trim` in glibc's malloc: [http://code.metager.de/source/search?q=malloc_trim&path=%2Fgnu%2Fglibc%2Fmalloc%2F&project=gnu](http://code.metager.de/source/search?q=malloc_trim&path=%2Fgnu%2Fglibc%2Fmalloc%2F&project=gnu) there are only: a declaration, a definition and two calls of it from `tst-trim1.c` (a test, not part of malloc). The same result for a grep over the whole of glibc (additionally it is listed in the abilists). The actual implementation of `malloc_trim` is `mtrim()`, but it is called only from `__malloc_trim()` of malloc.c.
So the question is: When `malloc_trim` or its internal implementation (`mtrim`/`mTRIm`) is called by glibc, is it called from `free` or `malloc` or `malloc_consolidate`, or any other function?
If this call is lost in current version of glibc, was it there in any earlier version of glibc, in any version of ptmalloc2, or in original dlmalloc code (<http://g.oswego.edu/dl/html/malloc.html>)? When and why it was removed? (What is the difference between `systrim`/`sys_trim` and `malloc_trim`?)
|
https://stackoverflow.com/questions/42283222/when-malloc-trim-is-was-called-automatically-from-free-in-glibcs-ptmalloc-or
|
[
"malloc",
"heap-memory",
"glibc"
] | 12 | 2017-02-16T11:44:26 |
[
"It loos like it only runs automatically when freeing a large space: github.com/bminor/glibc/blob/…. Unfortunately the threshold for \"large\" is not a tunable.",
"A very good question. I find this method to test malloc_trim() on running apps: notes.secretsauce.net/notes/2016/04/…",
"dlmalloc versions: g.oswego.edu/pub/misc; glibc git github.com/bminor/glibc/blob/master/malloc/malloc.c; version of malloc in glibc with main_trim: github.com/bminor/glibc/blob/…. Example of RSS memory grow: bugs.python.org/issue11849 - message 133929"
] | 3 |
Technology
| 0 |
386
|
stackoverflow
|
Connman without any user interaction
|
I'm trying to use Connman to manage the WiFi connection of my embedded system, because it automagically handles any type of protection.
In interactive mode it's very simple:
1. connmanctl
2. agent on
3. scan wifi
4. services
5. connect
6. enter password if requested
On my system, the user enters the WiFi credentials (SSID, password) using a remote (web) application. Then I would use this information to set up connman from a script.
The goal is to avoid making the user select which type of protection is to be set up. I mean, most users just enter an SSID/password, but they don't know whether it is a WPA-PSK or WEP connection.
I'm reading through the documentation, but I'm not sure which is the correct approach:
* a config file: <http://git.kernel.org/cgit/network/connman/connman.git/tree/doc/config-format.txt>
but as far as I understand I need to specify the type of the security:
> Security: The security type of the network. Possible values are 'psk' (WPA/WPA2 PSK), 'ieee8021x' (WPA EAP), 'none' and 'wep'. When not set, the default value is 'ieee8021x' if an EAP type is configured, 'psk' if a passphrase is present and 'none' otherwise.
It seems 'wep' is not handled if the field is omitted (a sketch of such a config file follows after this list).
* dbus-api: <http://git.kernel.org/cgit/network/connman/connman.git/tree/doc/manager-api.txt>
Here I understand it needs an 'agent' to feed the passphrase, thus I'm afraid I cannot send it programmatically.
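This is the kind of provisioning file the script would write for the config-file approach - the SSID, passphrase and file name are placeholders, and I'm assuming the usual `/var/lib/connman/*.config` location:
    # /var/lib/connman/wifi.config (hypothetical)
    [service_home_wifi]
    Type = wifi
    Name = MyNetworkSSID
    Passphrase = secret123
    # Security is omitted on purpose: per the documentation it defaults to
    # 'psk' when a passphrase is present - which is exactly the problem for WEP.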
Do you have any recommendation about this?
|
https://stackoverflow.com/questions/37230288/connman-without-any-user-interaction
|
[
"connection",
"wifi",
"agent",
"connman"
] | 12 | 2016-05-14T11:17:00 |
[
"I guess connman has for each session a directory in /var/lib/connman/. So we have to connect manually to a new wifi, create a session and then connman connects to saved wifi automatically. But I am not sure - just guessing when observing the behavior."
] | 1 |
Technology
| 0 |
387
|
stackoverflow
|
Is integer vectorization accuracy / precision of integer division CPU-dependent?
|
I tried to vectorize the premultiplication of 64-bit colors of 16-bit integer ARGB channels.
I quickly realized that, due to the lack of accelerated integer division support, I need to convert my values to `float` and use some SSE2/SSE4.1 intrinsics explicitly for the best performance. Still, I wanted to leave the non-specific generic version as a fallback solution (I know that it's currently slower than some vanilla operations, but it would provide future compatibility for possible improvements).
However, the results are incorrect on my machine.
A very minimal repro:
// Test color with 50% alpha
(ushort A, ushort R, ushort G, ushort B) c = (0x8000, 0xFFFF, 0xFFFF, 0xFFFF);
// Minimal version of the fallback logic if HW intrinsics cannot be used:
Vector128<uint> v = Vector128.Create(c.R, c.G, c.B, 0u);
v = v * c.A / Vector128.Create(0xFFFFu);
var cPre = (c.A, (ushort)v[0], (ushort)v[1], (ushort)v[2]);
// Original color:
Console.WriteLine(c); // prints (32768, 65535, 65535, 65535)
// Expected premultiplied color: (32768, 32768, 32768, 32768)
Console.WriteLine(cPre); // prints (32768, 32769, 32769, 32769)
I tried to determine what instructions are emitted causing the inaccuracy but I was really surprised to see that in SharpLab the results are [correct](https://sharplab.io/#v2:EYLgtghglgdgNAFxAJwK4wD4AEBMBGAWACgsAGAAizwDoAldBKMAU2oEkYFlYBnKAYx4BuYsQAUqHgAsA9sgTkAgnHKTZ88rRVq5CgOLbpu8gCEAlOX7kAvOTGkAHgA5SrlY4BiXj+4fef5J7eZiJExABqzPwIcng4TgA8qLAIAHzkAG425JHRsfHUAMLIzBAIzGL8dCpVBpbUJu6oIcRZtlkAVPWK5AD0OVExyHFORSVlFUFezaEZEMiWAAol2ZXUynY68mYZANqkALoqEkbbe3hHm6cIO7s4By1hJHgAnJWPVG/8y8yPQA). On the other hand, the issue is [reproducible](https://dotnetfiddle.net/4AMEZS) in .NET Fiddle.
Is it something that's expected on some platforms or should I report it in the runtime repo as a bug?
* * *
### Update
Never mind, this is clearly a bug. Using other values causes totally wrong results:
using System;
using System.Numerics;
using System.Runtime.Intrinsics;
(ushort A, ushort R, ushort G, ushort B) c = (32768, 65535, 32768, 16384);
Vector128<uint> v1 = Vector128.Create(c.R, c.G, c.B, 0u);
v1 = v1 * c.A / Vector128.Create(0xFFFFu);
// prints <32769, 49152, 57344, 0> instead of <32768, 16384, 8192, 0>
Console.WriteLine(v1);
// Also for the older Vector<T>
Span<uint> span = stackalloc uint[Vector<uint>.Count];
span[0] = c.R;
span[1] = c.G;
span[2] = c.B;
Vector<uint> v2 = new Vector<uint>(span) * c.A / new Vector<uint>(0xFFFF);
// prints <32769, 49152, 57344, 0, 0, 0, 0, 0> on my machine
Console.WriteLine(v2);
In the end I realized that the issue was at the multiplication: if I replace `* c.A` with the constant expression `* 32768`, then the result is correct. For some reason the `ushort` value is not correctly extracted/masked(?) out of the packed field. Even `Vector128.Create` is affected:
(ushort A, ushort R, ushort G, ushort B) c = (32768, 65535, 32768, 16384);
Console.WriteLine(Vector128.Create((int)c.A)); // -32768
Console.WriteLine(Vector128.Create((int)32768)); // 32768
Console.WriteLine(Vector128.Create((int)c.A, (int)c.A, (int)c.A, (int)c.A)); // 32768
* * *
### Update 2
In the end I filed an [issue](https://github.com/dotnet/runtime/issues/83387) in the runtime repo.
|
https://stackoverflow.com/questions/75732627/is-integer-vectorization-accuracy-precision-of-integer-division-cpu-dependent
|
[
"c#",
"vectorization",
"precision",
"simd",
"auto-vectorization"
] | 11 | 2023-03-14T04:37:29 |
[
"@GyörgyKőszeg: The compiler output I linked is for vec / 0xffffu. The results are exact, using a multiplicative inverse like I said, same as compilers do for scalar uint / constant. Why does GCC use multiplication by a strange number in implementing integer division? Did you think I meant v >> 16? Oh, you probably thought I meant doing the * part of you expression with an integer multiply; I was talking about using the high half of integer multiply as part of the division.",
"Alright, this must be a bug, using some other values end up in total off results. I will update the post soon.",
"@PeterCordes: yes, bit shifting is faster with vanilla code than division but the results are not exactly the same. And the floating point version now works well, it's just about the issue I noticed.",
"If you were using FP, it would probably perform better to multiply by 1.0f/0xffff (or however you write a float constant in C#), although that can't be represented exactly. But with about 23 bits of precision, might still give the correct 16-bit integer after rounding or truncating to 16-bit.",
"I don't see anything in the software fallback path which would cause this... I'd be tempted to either raise this in the dotnet/runtime repo, or ask in #allow-unsafe-blocks on discord (lots of JIT people hang out there, and they'll tell you if you need to open an issue)",
"This can be done with integer multiply and shifts, although it's not super efficient since it needs the high half of a 32x32 => 64-bit multiply, and SSE2 / AVX only gives you that with pmuludq which gives you the full 64-bit results. So only half the input elements per vector. godbolt.org/z/7E9P9aWMh shows GCC and clang using a multiplicative inverse for dividing a vector of 4x uint32_t by 0xffffu. This is exact integer division, no FP involved at any point.",
"In the meantime I realized that SharpLab also produces the wrong result in Debug mode. Strange, because on my computer both Debug and Release builds are incorrect. So I start to believe this is a bug after all."
] | 7 |
Technology
| 0 |
388
|
stackoverflow
|
Is there something like a continuation Arrow transformer?
|
The [`ContT`](https://hackage.haskell.org/package/mtl-2.2.1/docs/Control-Monad-Cont.html#v:ContT) monad transformer has an interesting property: if there is a `* -> *` type such as `Set` that has well-defined monadic operations but can't have a `Monad` instance due to some constraints (here `Ord a`), it's possible to wrap it in `ContT` (`ContT r Set`) to get a monad instance and defer the constraints outside it, namely to the point where we inject `Set` into `ContT r Set`. See [Constructing efficient monad instances on `Set` using the continuation monad](https://stackoverflow.com/q/12183656/1333025).
Is there something similar for arrows? An [arrow transformer](https://hackage.haskell.org/package/arrows-0.4.4.1/docs/Control-Arrow-Transformer.html#t:ArrowTransformer) that'd allow wrapping an "almost arrow" in it, yielding a valid `Arrow` instance, and deferring the problematic constraints to the place where we inject the "almost arrow" into it?
For example, if we had a type `AlmostArrow :: * -> * -> *` for which we'd have the usual `Arrow` operations, but with constraints, such as
arr' :: (Ord a, Ord b) => (a -> b) -> AlmostArrow a b
(>>>') :: (Ord a, Ord b, Ord c) => AlmostArrow a b -> AlmostArrow b c -> AlmostArrow a c
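(For concreteness, here is one hypothetical inhabitant of that shape - a Kleisli-style arrow over `Set`, where the constraint happens to land on composition rather than on `arr'`; it is only meant to illustrate the kind of structure I have in mind:)
    import           Data.Set (Set)
    import qualified Data.Set as Set
    -- A hypothetical "almost arrow": Kleisli-style arrows over Set.
    newtype SetKleisli a b = SetKleisli { runSetKleisli :: a -> Set b }
    arrS :: (a -> b) -> SetKleisli a b
    arrS f = SetKleisli (Set.singleton . f)
    composeS :: Ord c => SetKleisli a b -> SetKleisli b c -> SetKleisli a c
    composeS (SetKleisli f) (SetKleisli g) =
      SetKleisli $ \a -> Set.unions [ g b | b <- Set.toList (f a) ]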
As a bonus, if yes: is there some nifty, generic category-theory way to derive both `ContT` and such an arrow transformer?
|
https://stackoverflow.com/questions/42873249/is-there-something-like-a-continuation-arrow-transformer
|
[
"haskell",
"monad-transformers",
"continuations",
"category-theory",
"arrow-abstraction"
] | 11 | 2017-03-18T03:37:02 |
[
"A free arrow would surely do the trick. Then you just have to do some junk to make it efficient."
] | 1 |
Technology
| 0 |
389
|
stackoverflow
|
Multiset domination algorithm
|
Let us say that a multiset M _dominates_ another multiset N if each element of N occurs at least as many times in M as it does in N.
Given a target multiset M and an integer k>0, I'd like to find a list, L, of size-k multisets whose sum dominates M. I'd like this list to be of small cost, where my cost function is of the form:
cost = c*m + n
where c is a constant, m is the number of multisets in L, and n is the number of _distinct_ multisets in L.
How can I do this? An efficient algorithm to find the optimal solution would be ideal.
The problem comes from trying to fulfill a customer's order for printing pages with a specialized block-printer that prints k pages at a time. Setting up the block-printer to print a particular template of k pages is costly, but once a template is initialized, printing with it is cheap. The target multiset M represents the customer's order, and the n distinct multisets of the list L represent n distinct k-page templates.
In my particular application, M typically has >30 elements whose multiplicities are in the range [10^4, 10^6]. The value of k is 15, and c is approximately 10^-5.
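To make the objective concrete, here is a small feasibility/cost checker for a candidate list L (only a sketch, with a toy k=3 instance; the actual construction of L is the open part of the question):
    from collections import Counter
    def dominates(total: Counter, target: Counter) -> bool:
        """True if every element of `target` occurs at least as often in `total`."""
        return all(total[x] >= mult for x, mult in target.items())
    def is_feasible(L, M: Counter) -> bool:
        """Does the multiset sum of the size-k multisets in L dominate M?"""
        total = Counter()
        for template in L:
            total.update(template)
        return dominates(total, M)
    def solution_cost(L, c=1e-5):
        """cost = c*m + n, with m = len(L) and n = number of distinct multisets in L."""
        m = len(L)
        n = len(set(L))
        return c * m + n
    # Toy instance (k = 3 just to keep it small): M needs 4 copies of p1 and 2 of p2.
    M = Counter({"p1": 4, "p2": 2})
    L = [("p1", "p1", "p2"), ("p1", "p1", "p2")]  # one distinct template, printed twice
    assert is_feasible(L, M)
    print(solution_cost(L))  # 1.00002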
|
https://stackoverflow.com/questions/36459635/multiset-domination-algorithm
|
[
"algorithm",
"multiset"
] | 11 | 2016-04-06T11:36:37 |
[
"Well given that k is 15 and your example is worst case for my approach it can be bearable IMO. At least it gives you a fixed cap, with the \"one of a kind\" approach that would be optimal in your example, if distinct elements aren't multiples of k or multiplicities vary a lot (they have a 10^2 range right?) you're back having to find a way of grouping them. You can always take some initial time calculating the number of distinct elements and their multiplicity, then based on that remove as many as you can with you \"horizontal\" approach and clean up the rest with \"vertical\" approach.",
"@CarloMoretti Consider the case when M consists of a*k distinct elements, each with identical multiplicity b. Clearly the optimal cost here is a(cb+1). Your solution, if I understand it correctly, does no better than a(cb+k). This performs nearly k times worse than optimal if k>>cb.",
"Since multiplicities are so high why not just make sets of k identical elements? Let's say M has p distinct elements, you'll have at first n = p (without counting the reminder multiplicities). Since k is 15 I guess all combined remaining items will be r < 15*p which if you combine them at random will result in < p k-sets. So at the end you'll have n < 2p. So m = |M| / 15 if p is [10, 10^2] then |M| is in [10^5, 10^8] and m is in [10^4, 10^7] so c*m would be in [10^-1, 10^2] which is fairly comparable to p so roughly cost < 3p",
"This seems like a really interesting question! It does seem like it might be NP-hard to find the exact optimum, but I would be very interested in hearing some sample problem instances to see what I can get.",
"@DavidEisenstat btw, I just deleted an earlier comment saying I could express the optimization problem with linear constraints, but I realized I was wrong. The only formulation I know of currently has quadratic constraints.",
"@piotrekg2 My friend has so far only given me one problem instance, without a corresponding human solution. I am pushing him to make more available.",
"@dshin I know about the magic of commercial solvers, but I'd be a little surprised if you had a formulation that they would work well on.",
"@dshin Did you try to compare combinations computed by humans with combinations computed by a greedy algorithm?",
"@DavidEisenstat I don't think integer quadratic programs are optimally solvable in polynomial time in general, but that often doesn't stop commercial scientific solvers from doing a good job. Adding a restriction of not sharing pages between templates hurts your ability to minimize cost.",
"(Fixed prev comment) This is similar to the NP-hard Cutting Stock problem. I've come up with a deterministic algorithm that will take just O(k|M|^2) total time (with |M| being the number of elements in M, not their sum) to find |M| solutions: one for each possible number i of distinct templates. The solution for i distinct templates exactly minimises the maximum copy count of any template under the constraint that it is possible to order templates, and pages, so that each page spans a contiguous block of templates. Taking the best of these |M| solutions should give a good quality answer.",
"@j_random_hacker I was actually introduced to this problem by my friend - he told me that his company's current solution is to have a human try combinations by hand for hours until he finds one that seems ok. If you are serious about being paid, I can get you guys in touch. I'd imagine he'd want to see how the algorithm compares to the human solution (and to a simple greedy solution) on real life examples to put a dollar amount on the algorithm's value.",
"Interesting problem! Is the quadratic program actually solvable to optimality? Would it make sense to consider a variant where each type of page can belong to exactly one template?",
"@dwanderson One approach is to fix n and then formulate the optimization problem as an integer quadratic program, which can be solved with a scientific library. Then iterate over candidate values of n. I'm hoping for something nicer. I've tried a greedy-ish algorithm which works decently but is not optimal.",
"My guess is that finding the optimal solution is NP-complete, but greedy can find a solution fairly easily.",
"What have you tried? SO isn't a code-writing service. Which parts of the algorithm confuse you or are you having trouble with?"
] | 15 |
Technology
| 0 |
390
|
stackoverflow
|
Equality of int&'s in template parameters
|
Suppose we have the following program:
template <class T, T n1, T n2>
struct Probe {
static const int which = 1;
};
template <class T, T n>
struct Probe<T, n, n> {
static const int which = 2;
};
int i = 123;
const int myQuestion = Probe<int&, i, i>::which;
I am pretty sure that `myQuestion` should be `2` regardless of the version of the C++ standard, but compilers disagree on it. MSVC and clang say that it is `2` until C++14, and `1` since C++17. See the [demo](https://godbolt.org/z/efETKzTo3). What is the truth?
My investigation so far:
* I have found one relevant sentence in the C++ standard. It was there in [C++11](https://timsong-cpp.github.io/cppwp/n3337/temp.type#1.6), [C++14](https://timsong-cpp.github.io/cppwp/n4140/temp.type#1.6), [C++17](https://timsong-cpp.github.io/cppwp/n4659/temp.type#1.6) and [C++20](https://timsong-cpp.github.io/cppwp/n4868/temp.type#2.7). It did not change.
* If you remove the parameter `T` from the example code, all compilers agree that `myQuestion` is `2`. [Demo](https://godbolt.org/z/6cW8GGEod).
|
https://stackoverflow.com/questions/71022302/equality-of-ints-in-template-parameters
|
[
"c++",
"c++17",
"c++14",
"template-specialization",
"non-type-template-parameter"
] | 10 | 2022-02-07T08:54:16 |
[
"Ok, I don't have an issue with that. It is a duplicate, but it's not exactly a question that gets asked a lot, so it's not a big deal to leave it open. I wouldn't dispute the closure either, i.e. I'm fine not voting either way.",
"@cigien: It's not a good idea to mark my question as a duplicate of that long question. Even if my question is answered there, since it's so long, it would be difficult for the readers to extract information from that.",
"@AnoopRana - Accessing the object's value in a constant expression requires it to satisfy additional constraints so the value is usable (in that case it should be constexpr indeed). Referring to the object alone (like in the OP, on surface level) doesn't require anything from the value.",
"This is a duplicate of stackoverflow.com/questions/37369129 but can't be targetted because it doesn't have an upvoted/accepted answer.",
"@StoryTeller-UnslanderMonica Ok then shouldn't this work without error? As you said, here we have a reference ref to an object i with static storage duration. Then ref can be used as size of an array. Am i missing something?",
"@AnoopRana - timsong-cpp.github.io/cppwp/n4868/expr.const#11 should have the gist of it.",
"@StoryTeller-UnslanderMonica I think you're right[in which case i'll delete my comments] but i can't find the exact statement that you quoted: \"A reference to an object with static storage duration is permissable in constant expressions, even when the referent is not const\". Can you put a link here for me?",
"@AnoopRana - And you missed the point of mine. It already is valid in constant expressions.",
"@AnoopRana Same error with contexpr/const int&: godbolt.org/z/a5Mrj4M53",
"@AnoopRana - it's not ill-formed. For god's sake, it's a minimal reproducible example, check it before commenting. A reference to an object with static storage duration is permissable in constant expressions, even when the referent is not const.",
"Definitely an interesting Q. References are meant to be translucent in the type system. Not sure how that may interplay with template equivalence checks however.",
"Looks like the program is ill-formed since i is not a constant expression in your case. If you make it a constant expression, the program will no longer compile in any version.",
"First i should be a constant expression when passing as template argument. Add constexpr or const for it.",
"Shouldn't i be const at least?"
] | 14 |
Technology
| 0 |
391
|
stackoverflow
|
Can skew binomial heaps support efficient merge?
|
The skew binomial heaps described in Okasaki's _Purely Functional Data Structures_ support merge in worst-case `O(log (max (m,n)))` time, where `m` and `n` are the lengths of the queues being merged. This is worse than segmented binomial queues, which support it in worst-case `O(log (min (m,n)))` time, and lazy binomial queues, which support it in worst-case `O(log (max (m,n)))` time but `O(log (min (m,n)))` amortized time[*]. This seems to be inherent in the restriction that the skew binary number in the queue representation is in canonical form (only one 2, and only as the least significant nonzero digit). Would it be possible to relax this restriction somewhat to get more efficient merges? The basic challenge is that a 2 must not be allowed to cascade into another 2.
[*] I've also recently come up with a variant of scheduled binomial queues with the same worst-case bounds as segmented queues; that version is not yet fully implemented.
|
https://stackoverflow.com/questions/65051168/can-skew-binomial-heaps-support-efficient-merge
|
[
"haskell",
"data-structures",
"binomial-heap"
] | 10 | 2020-11-28T07:21:46 |
[] | 0 |
Technology
| 0 |
392
|
stackoverflow
|
Using ESM Modules with Coffeescript and Node.js
|
ECMAScript modules are the future for packaging JS code, and both Node.js and CoffeeScript support them. But I've had some trouble getting their ESM support to work together.
The current stable Node (12.x) has ESM modules behind a flag (`--experimental-modules`). Coffeescript supports passing flags through to Node with `--nodejs`. So with a couple of files using ESM modules:
# a.coffee
import b from './b.coffee'
b()
# b.coffee
b = ->
console.log "Hello"
export default b
In theory, we can run this code with `npx coffee --nodejs --experimental-modules a.coffee`. In practice this raises an error:
13:24 $ npx coffee --nodejs --experimental-modules a.coffee
(node:8923) ExperimentalWarning: The ESM module loader is experimental.
(node:8923) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.
/media/projects/coffeemodules/a.coffee:1
import b from './b.coffee';
^^^^^^
SyntaxError: Cannot use import statement outside a module
...
The error and docs say there are two ways to flag a file as containing an ESM module, one is to use the `mjs` extension (which isn't available to us here), and the other is to set `"type": "module"` in `package.json`, which also doesn't seem to work.
So: can it be done? Is there a way to get Coffeescript and Node.js ES modules to play together?
|
https://stackoverflow.com/questions/59312784/using-esm-modules-with-coffeescript-and-node-js
|
[
"node.js",
"coffeescript",
"es6-modules"
] | 10 | 2019-12-12T13:14:21 |
[] | 0 |
Technology
| 0 |
393
|
stackoverflow
|
MSDeploy not replacing encoded xml strings
|
In web.config I have:
<applicationSettings>
<App.Properties.Settings>
<setting name="ProfitConnectorToken" serializeAs="String" xdt:Transform="Replace" xdt:Locator="Match(name)">
<value>__ProfitConnectorToken__</value>
</setting>
</App.Properties.Settings>
In my parameters.xml:
<parameter name="ProfitConnectorToken" description="Description for ProfitConnectorToken" defaultvalue="__PROFITCONNECTORTOKEN__" tags="">
<parameterentry kind="XmlFile" scope="\\web.config$" match="/configuration/applicationSettings/App.Properties.Settings/setting[@name='ProfitConnectorToken']/value/text()" />
And in my SetParameters.xml:
<setParameter name="ProfitConnectorToken" value="<token><version>1</version><data>XXXXXXXXXXXXXXXXXXXXXXXXX</data></token>" />
But this value is not set when the web application is deployed. When I change my SetParameters.xml to:
<setParameter name="ProfitConnectorToken" value="TEST" />
It does work, so my XPath is correct. Why is the encoded XML value not set?
|
https://stackoverflow.com/questions/44433931/msdeploy-not-replacing-encoded-xml-strings
|
[
"asp.net",
"asp.net-mvc",
"web-config",
"msdeploy",
"webdeploy"
] | 10 | 2017-06-08T04:02:38 |
[
"Nope, if I do that the value isn't replaced at all, not even my \"TEST\" value which was working before.",
"Try removing /text() at the end of your XPath query?"
] | 2 |
Technology
| 0 |
394
|
stackoverflow
|
How to ignore warnings in headers that are part of a clang C++ module?
|
We're using clang with `-fmodule` and `-fcxx-module` to enable module support as documented at <http://clang.llvm.org/docs/Modules.html>. We're already seeing a significant improvement in build times by defining module maps for our core libraries.
However, we have some library headers that use pragmas to disable warnings for certain lines, for example:
template <typename TFloat>
static bool exactlyEqual(TFloat lhs, TFloat rhs)
{
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wfloat-equal"
return lhs == rhs;
#pragma clang diagnostic pop
}
When this header is pulled in as a precompiled module, it seems clang's internal representation does not preserve the pragma information and the warning is still emitted. Since we treat warnings as errors this causes compilation to fail. Some might argue to just disable `float-equal` entirely, but we have a bunch of other cases with different warnings which we don't want to globally disable.
We're already using `-Wno-system-headers` and `-isystem` so that clients of libraries generally don't see warnings like this anyway (even without the pragma), but this doesn't seem to work when the header is imported as a module. In addition we still hit these warnings for code internal to the library which includes the header as a non-system header (i.e. without using `-isystem` / using double quotes), since module precompilation and importing also occurs here.
I've tried using `_Pragma(...)` instead of `#pragma` which didn't have any effect.
Is there some other way to conditionally ignore warnings in headers that come from precompiled clang modules?
**UPDATE:** I've put a sample project up on <https://github.com/MikeWeller/ClangModuleWarnings> which reproduces the problem
**UPDATE:** It seems the `[system]` module attribute will suppress all warnings. However, this suppresses even the warnings we want to see when building the library itself, and we don't want to make all our modules system modules. If we find a way not to use the library's module map when building the library itself, this may be acceptable, but we'd still like to pragma out certain warnings for non-system modules.
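(For reference, the `[system]` attribute mentioned above sits in the module map like this - the module and header names are placeholders:)
    module MyCoreLib [system] {
      header "MyCoreLib.h"
      export *
    }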
|
https://stackoverflow.com/questions/38889998/how-to-ignore-warnings-in-headers-that-are-part-of-a-clang-c-module
|
[
"c++",
"clang",
"clang++",
"llvm-clang"
] | 10 | 2016-08-11T00:30:06 |
[
"Seeing as this question was asked way back when in 2016, are you still having this issue? Have you asked on the Clang mailing list?"
] | 1 |
Technology
| 0 |
395
|
stackoverflow
|
iOS OpenGL Catch-22: OpenGL background rules and "app snapshot" for App Switcher
|
Like many developers, I have an app that uses OpenGL via a `UIView` subclass whose `layerClass:` method returns `[CAEAGLLayer class]`.
Note I am **not** using `GLKit` or `GLKView` or `GLKViewController`
When I click Home to put the app into the background, after `applicationDidEnterBackground`, iOS calls my view's `layoutSubviews` twice, with portrait and landscape sizes, trying to generate an "app snapshot" as explained here (see "prepare for the app snapshot"):
<https://developer.apple.com/library/ios/documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/StrategiesforHandlingAppStateTransitions/StrategiesforHandlingAppStateTransitions.html#//apple_ref/doc/uid/TP40007072-CH8-SW10>
How can this possibly work?
There seems to be a direct contradiction here with the very clear advice on this page (see "Background Apps May Not Execute Commands on the Graphics Hardware"):
<https://developer.apple.com/library/ios/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/ImplementingaMultitasking-awareOpenGLESApplication/ImplementingaMultitasking-awareOpenGLESApplication.html#//apple_ref/doc/uid/TP40008793-CH5-SW1>
that we must not draw anything with OpenGL after `applicationDidEnterBackground`
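(The conservative reading of that rule is a guard like the following in every GL entry point; `_inBackground` and the helper names are placeholders for my own flags and methods:)
    - (void)layoutSubviews
    {
        [super layoutSubviews];
        if (_inBackground) {
            // Per the OpenGL ES guide we must not touch the GPU here, so defer
            // framebuffer (re)creation and drawing until we are foregrounded.
            return;
        }
        [self recreateFramebufferIfNeeded];   // hypothetical helper
        [self drawScene];                     // hypothetical helper
    }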
If we don't draw, we cannot generate the snapshots. We must violate one rule or the other.
But we also want good snapshots in both orientations, so that when the user double-clicks home and goes to the App Switcher, they see reasonable snapshot images.
Even if I temporarily change my code to fully implement the `layoutSubviews` after `applicationDidEnterBackground` by creating an OpenGL surface and drawing (which, contrary to the Apple dox, does not crash), and then I double-click home and look at the snapshot in different orientations, only the snapshot for the orientation I was in before is correct. The other one is a super-ugly nasty re-scaling of the other snapshot. Apple seems to be going through the motions of taking snapshots, but not actually taking them.
I am seeing this behavior on iOS 9.3.2 on an iPad Mini. The behavior doesn't show on most/all iPhone devices since they don't support a landscape App Switcher.
**UPDATE:** the problem also happens, and happens much worse, when using the new iOS 9 "Slide Over" multitasking feature and switching the same app between being a normal fullscreen app vs. being an app slid over another app. iOS only seems to capture a snapshot of the last app size, so after using the app at 640px wide and then trying to use the App Switcher to get to the app fullscreen, we see a grotesque pixely out-of-proportion snapshot in the App Switcher and also during the first second of launch. There has got to be some way to fix this!
**UPDATE 2:** I have seen a few iOS apps, which I know to be OpenGL-only apps, where if you use them in portrait, then go back Home and rotate to landscape, then double-click Home, you see the portrait launch image rather than a horrible, distorted, out-of-proportion image like I am seeing. While I would prefer to render snapshot images, I would even be happy to see the launch image. But the option everyone mentions, `ignoreSnapshotOnNextApplicationLaunch`, **does not work** because it only affects what you see at actual app launch time, not what is seen in the App Switcher when you double-click Home, and for many on StackOverflow it actually didn't even work at all (not even at launch time).
How do we get around this Catch-22?
This StackOverflow thread (unlike me, the OP here uses GLKit but the symptom is the same):
[iOS OpenGL ES screen rotation while background apps bar visible](https://stackoverflow.com/questions/13954994/ios-opengl-es-screen-rotation-while-background-apps-bar-visible)
confirms that some OpenGL apps on iOS **are** able to have proper preview images in the Home double-press app switcher for both orientations. How do they do it?
How can I get proper snapshots shown in both orientations in the App Switcher?
Here is a log of `AppDelegate` (appdel), `ViewController` (eaglc) and `View` (eaglv) calls that come from iOS at the time that I click the Home button once to exit the app. You can see the attempts at snapshotting that come well after `didEnterBackground`:
+ 189.57ms appdel appWillResignActive
+ 0.74ms appdel appWillResignActive between_view_os_callbacks 0
+ 4.11ms appdel appWillResignActive between_view_os_callbacks 0 done
+ 0.82ms appdel appWillResignActive activation_changed
+ 1.50ms appdel appWillResignActive activation_changed done
+ 0.47ms appdel appWillResignActive between_view_os_callbacks 1
+ 2.68ms drawing rect [(144,1418)+(2,66)] (0 left)
+ 44.28ms swap_buffers glFlush()
+ 6.16ms swap_buffers presentRenderBuffer
+ 9.01ms appdel appWillResignActive between_view_os_callbacks 1 done
+ 0.61ms appdel save_state
..app saving data, no OpenGL here..
+ 0.49ms appdel save_state calling glFinish
+ 0.34ms appdel save_state done
+ 0.25ms appdel appWillResignActive done
+ 492.72ms appdel applicationDidEnterBackground
+ 0.56ms appdel save_state
..app saving data, no OpenGL here..
+ 0.65ms appdel save_state calling glFinish
+ 0.54ms appdel save_state done
+ 0.65ms eaglv let_go_of_frame_buffer_render_buffer
app drops OpenGL frame_buffer and render_buffer here
+ 1.10ms appdel applicationDidEnterBackground done
Now we are not supposed to do OpenGL, BUT...
+ 6.30ms eaglc supportedInterfaceOrientations
+ 5.74ms about_to_sleep between_view_os_callbacks
+ 1.30ms SKIPPING between_view_os_callbacks cuz app in background
+ 0.66ms about_to_sleep between_view_os_callbacks done
+ 135.85ms eaglc willRotateToInterfaceOrientation
+ 2.49ms appdel willChangeStatusBarFrame new=0,0 768x20
+ 3.21ms appdel didChangeStatusBarFrame old=0,0 1024x20
we get a portrait layoutSubviews....
+ 1.26ms eaglv layoutSubviews (initted=1, have_fbrb=0)
+ 1.80ms eaglv assure_frame_buffer_render_buffer
+ 0.95ms eaglv assure_fbrb scale ios=2 eaglv=2
+ 0.90ms eaglv assure_fbrb (frame=1536,2048)
+ 1.04ms eaglv assure_fbrb (layer frame=1536,2048)
+ 0.92ms eaglv assure_fbrb in bg: will make fbrb later
+ 0.96ms eaglv layoutSubviews done
+ 3.11ms eaglc didRotateFromInterfaceOrientation
+ 149.07ms eaglc willRotateToInterfaceOrientation
+ 1.99ms appdel willChangeStatusBarFrame new=0,0 1024x20
+ 2.35ms appdel didChangeStatusBarFrame old=0,0 768x20
then a landscape layoutSubviews...
+ 1.91ms eaglv layoutSubviews (initted=1, have_fbrb=0)
+ 1.09ms eaglv assure_frame_buffer_render_buffer
+ 0.91ms eaglv assure_fbrb scale ios=2 eaglv=2
+ 1.65ms eaglv assure_fbrb (frame=2048,1536)
+ 0.92ms eaglv assure_fbrb (layer frame=2048,1536)
+ 0.93ms eaglv assure_fbrb in bg: will make fbrb later
+ 0.83ms eaglv layoutSubviews done
+ 2.79ms eaglc didRotateFromInterfaceOrientation
and, adding insult to injury, we get this log message:
Snapshotting a view that has not been rendered results in an empty snapshot. Ensure your view has been rendered at least once before snapshotting or snapshot after screen updates.
|
https://stackoverflow.com/questions/37602075/ios-opengl-catch-22-opengl-background-rules-and-app-snapshot-for-app-switcher
|
[
"ios",
"objective-c",
"ipad",
"opengl-es",
"layoutsubviews"
] | 10 | 2016-06-02T14:04:01 |
[
"perhaps you can try to allow only portrait mode when entering background, so that it at least wouldn't generate distorted snapshot ... but i think this is a good question, and hope somebody else could offer some insights",
"I do update my UI before returning from applicationDidEnterBackground, however the problem is that the two layoutSubviews calls which represent my only chance to draw the state of the app in the other orientation (portrait or landscape, whichever one it is) both come AFTER applicationDidEnterBackground, when Apple says it is forbidden to draw. Catch-22! So even if Apple were using my app state that I leave after applicationDidEnterBackground, it's not possible that Apple will know how to draw the app in the other orientation. And so I see distorted ugliness in the App Switcher.",
"Are you updating your UI / drawing with GLES in layoutSubviews:? Apple states that you need to do that \"Before returning from your applicationDidEnterBackground: method\", which means you need to do UI updates in applicationDidEnterBackground:"
] | 3 |
Technology
| 0 |
396
|
stackoverflow
|
How to calculate the checksum in an XFA form
|
When you save an XFA form (XFA = XML Forms Architecture) using Adobe software, a checksum attribute is added to the form element. This checksum appears to be a SHA-1 digest, but it's unclear what is actually fed to the hash. Does anyone have any idea how this is generated? This value is needed by Adobe Acrobat to validate what's actually in the form's XML data, but when I create a hash of the XML that is being fed to the form, Adobe Acrobat doesn't accept it. This checksum attribute isn't documented in the XFA specification, so I would really appreciate it if somebody could:
1. Confirm that the value is actually a hash created using the SHA-1 hashing algorithm?
2. Explain which data should be used to create this hash.
|
https://stackoverflow.com/questions/27470442/how-to-calculate-the-checksum-in-an-xfa-form
|
[
"xml",
"forms",
"pdf",
"hash",
"xfa"
] | 10 | 2014-12-14T06:46:23 |
[
"It seems like iText once had an implementation of this, but I can't find it in the more recent versions of their software: api.itextpdf.com/pdfXFA/java/2.0.2/index.html?com/itextpdf/tool/…",
"@BrunoLowagie I am working on populate data into \"form\" - xfa.org/schema/xfa-form/2.8. But it doesn't affect any. So, without checksum, we cannot do it?",
"@Setasign I don't have any answer yet. Foxit must have reverse engineered it...",
"Bruno, do you have any news on that issue? Or is it still in the hands of Adobe? Foxit, for example, is able to create this hash..."
] | 4 |
Technology
| 0 |
397
|
tex
|
Listings package and forcing inclusion of empty lines at end of line ranges?
|
By default, the listings package suppresses empty lines at the end of the file, and this behavior can be switched off with the option `showlines`.
However, listings also suppresses empty lines at the end of each range in a `linerange`. Consider the following Java source file:
public class Example implements StringHandler {
/**
* Prints the given string.
*
* @param s the given string
*/
@Override
public void handle(String s) {
System.out.println(s);
}
}
And this is the LaTeX source file `test.tex`:
\documentclass[a4paper,landscape]{slides}
\usepackage{color,listings,courier}
\lstset{language=Java,%
basicstyle=\ttfamily,%
numbers=left,%
numberstyle=\tiny,%
commentstyle=\color{blue}\itshape}
\begin{document}
% Suppress complete javadoc and @Override (to avoid cluttering the slide)
\begin{slide}
\lstinputlisting[linerange={1-2,9-13}]{Example.java}
\end{slide}
\end{document}
The expected output is:
public class Example implements StringHandler {
public void handle(String s) {
System.out.println(s);
}
}
But instead, I get
public class Example implements StringHandler {
public void handle(String s) {
System.out.println(s);
}
}
Line 2 (which is empty) is not included in the output, being at the end of a line range. (I am using listings v1.5; but v1.3 behaves similarly.)
It turns out that empty lines at the _beginning_ of a line range (consisting of more than one line) are included. But that is of no help in this situation.
How can I force the inclusion of these embedded empty lines? This is useful to make the listing more readable.
The options `showlines` and `emptylines` do not affect this behavior.
|
https://tex.stackexchange.com/questions/233445/listings-package-and-forcing-inclusion-of-empty-lines-at-end-of-line-ranges
|
[
"listings"
] | 7 | 2015-03-16T09:22:19 |
[
"@user2768: No, I never a solution.",
"Did you ever find a solution?"
] | 2 |
Technology
| 0 |
398
|
tex
|
Glossary style with tabularray
|
I am currently migrating from xltabular to tabularray. As part of that I want to adapt my custom glossary style for List of Symbols and List of Abbreviations (same style).
This is my old code:
% tables
\usepackage{booktabs}
\usepackage{multirow}
\usepackage{xltabular}
\usepackage{tabularray}
\addto\captionsngerman{
\DefTblrTemplate{contfoot-text}{default}{Fortsetzung auf der n\"achsten Seite}
\DefTblrTemplate{conthead-text}{default}{(fortgesetzt)}
}
% glossaries
\usepackage[
abbreviations,
nonumberlist,
record,
symbols
]{glossaries-extra}
\GlsXtrLoadResources[
src=glossaries,
not-match={entrytype=symbol}
]
\GlsXtrLoadResources[
selection=all,
src=glossaries,
type=symbols,
match={entrytype=symbol}
]
\newglossarystyle{customlong}{
\setglossarystyle{long}
\renewenvironment{theglossary}{\xltabular{\textwidth}{llX}}{\endxltabular}
\renewcommand{\glossentry}[2]{
\glsentryitem{##1}\textbf{\glstarget{##1}{\glossentryname{##1}}} & \multicolumn{2}{X}{\glossentrydesc{##1}}\\
}
\renewcommand{\subglossentry}[3]{
& \glssubentryitem{##2}\glstarget{##2}{\strut} & \glossentrydesc{##2}\\
}
\ifglsnogroupskip
\renewcommand*{\glsgroupskip}{}%
\else
\renewcommand*{\glsgroupskip}{\\}%
\fi
}
I would like to get rid of:
\usepackage{booktabs}
\usepackage{multirow}
\usepackage{xltabular}
Therefore, I need to adapt `\newglossarystyle{customlong}{...}`. This is my current state:
\newglossarystyle{customlong}{
\setglossarystyle{long}
\renewenvironment{theglossary}{
\begin{longtblr}[
entry=none,
label=none
]{
colspec={llX},
hspan=minimal,
stretch=1,
width=\textwidth
}
}{\end{longtblr}}
\renewcommand{\glossentry}[2]{
\glsentryitem{##1}\textbf{\glstarget{##1}{\glossentryname{##1}}} & \SetCell[c=2]{l}\glossentrydesc{##1}\\
}
\renewcommand{\subglossentry}[3]{
& \glssubentryitem{##2}\glstarget{##2}{\strut} & \glossentrydesc{##2}\\
}
\ifglsnogroupskip
\renewcommand*{\glsgroupskip}{}%
\else
\renewcommand*{\glsgroupskip}{\\}%
\fi
}
Compilation fails with 'Misplaced alignment tab character &.'. When I replace `&` with `\&` in `\renewcommand{\glossentry}[2]{...}` and `\renewcommand{\subglossentry}[3]{...}`, the compilation is successful but the output obviously contains '&'.
I assume that this error is related to the following content of the package documentation of tabularray:
> "In contrast to traditional tabular environment, tabularray environments need to see every & and \ when splitting the table body with l3regex. And you can not put cell text inside any table command defined with \NewTableCommand. But you could use outer key expand to make tabularray expand every occurrence of a specified macro once before splitting the table body. Note that you can not expand a command defined with \NewDocumentCommand." (<https://ftp.gwdg.de/pub/ctan/macros/latex/contrib/tabularray/tabularray.pdf>, p. 30)
This would mean that I need to expand `\renewcommand{\glossentry}[2]{...}` and `\renewcommand{\subglossentry}[3]{...}`. I have no idea how to achieve that. Can someone help?
Thanks in advance.
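Based on the quoted paragraph, the direction I suspect is needed is the outer-spec key `expand` - something like the untested sketch below. I have not verified that expanding `\glossentry` this way is sufficient, nor how to also cover `\subglossentry`:
    \renewenvironment{theglossary}{
      \begin{longtblr}[
        entry=none,
        label=none,
        expand=\glossentry % untested: expand this macro before the body is split
      ]{
        colspec={llX},
        hspan=minimal,
        stretch=1,
        width=\textwidth
      }
    }{\end{longtblr}}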
# Edit 1: M(N)WE
This is my current code, which prints '&'. Some unrelated content has been removed. I build with `latexmk`.
## main.tex
\documentclass{scrbook}
\usepackage{tabularray}
\usepackage[
abbreviations,
nonumberlist,
record,
symbols
]{glossaries-extra}
\GlsXtrLoadResources[
src=glossaries,
not-match={entrytype=symbol}
]
\GlsXtrLoadResources[
selection=all,
src=glossaries,
type=symbols,
match={entrytype=symbol}
]
\newglossarystyle{customlong}{
\setglossarystyle{long}
\renewenvironment{theglossary}{
\begin{longtblr}[
entry=none,
label=none
]{
colspec={llX},
hspan=minimal,
stretch=1,
width=\textwidth
}
}{\end{longtblr}}
\renewcommand{\glossentry}[2]{
\glsentryitem{##1}\textbf{\glstarget{##1}{\glossentryname{##1}}} \& \SetCell[c=2]{l}\glossentrydesc{##1}\\
}
\renewcommand{\subglossentry}[3]{
\& \glssubentryitem{##2}\glstarget{##2}{\strut} \& \glossentrydesc{##2}\\
}
\ifglsnogroupskip
\renewcommand*{\glsgroupskip}{}%
\else
\renewcommand*{\glsgroupskip}{\\}%
\fi
}
\newglossarystyle{customindex}{
\setglossarystyle{index}
\renewcommand{\glstreeitem}{\parindent0pt\par}
\renewcommand{\glstreepredesc}{\par\glstreeitem\parindent40pt\hangindent40pt}
}
\begin{document}
\frontmatter
\printunsrtabbreviations[style=customlong]
\printunsrtsymbols[style=customlong]
\mainmatter
\appendix
\backmatter
\printunsrtglossary[style=customindex]
\end{document}
## glossaries.bib
@entry{gls-uml,
name = {Unified Modeling Language},
description = {\enquote{A specification defining a graphical language for visualizing, specifying, constructing, and documenting the artifacts of distributed object systems.}\footnote{\url{https://www.omg.org/spec/UML}, aufgerufen am 20.02.2023}.}
}
@abbreviation{auv,
description = {\gls{gls-auv}},
short = {AUV},
long = {Autonomous Underwater Vehicle}
}
@symbol{v-desired,
name = {\ensuremath{\overrightarrow{v_{desired}}}},
description = {Wunschgeschwindigkeit}
}
## .latexmkrc
@default_files = ('main');
$pdf_mode = 4;
$dvi_mode = 0;
$postscript_mode = 0;
$lualatex = 'lualatex -synctex=1 -interaction=nonstopmode %O %S';
push @generated_exts, 'glstex', 'glg';
$clean_ext .= ' %R.bbl %R.glstex %R.lol %R.run.xml %R-1.glstex ';
add_cus_dep( 'aux', 'glstex', 0, 'run_bib2gls' );
sub run_bib2gls {
    # Capture the bib2gls exit status here so it is visible to the
    # log check and the return statement below.
    my $ret;
    if ($silent) {
        $ret = system "bib2gls --silent --group $_[0]";
    }
    else {
        $ret = system "bib2gls --group $_[0]";
    }
    my ( $base, $path ) = fileparse( $_[0] );
    if ( $path && -e "$base.glstex" ) {
        rename "$base.glstex", "$path$base.glstex";
    }
    # Analyze the .glg log file and register the .bib files as dependencies.
    local *LOG;
    $LOG = "$_[0].glg";
    if ( !$ret && -e $LOG ) {
        open LOG, "<$LOG";
        while (<LOG>) {
            if (/^Reading (.*\.bib)\s$/) {
                rdb_ensure_file( $rule, $1 );
            }
        }
        close LOG;
    }
    return $ret;
}
|
https://tex.stackexchange.com/questions/678834/glossary-style-with-tabularray
|
[
"glossaries",
"tabularray",
"glossaries-extra",
"xltabular"
] | 6 | 2023-03-09T01:12:54 |
[
"I am having the same Problem... I think Nicola Talbot could say if glossaries ist able to handle this or not.",
"Thank you. See my edit for mwe.",
"Welcome to TeX.SX! Can you please add a complete minimal working example starting with \\documentclass and ending with \\end{document} instead of showing only some code snippets. This would help us to test you code, reproduce the issue and find and test a suggestion."
] | 3 |
Technology
| 0 |
399
|
ai
|
Is the Bellman equation that uses sampling weighted by the Q values (instead of max) a contraction?
|
It is proved that the Bellman update shown in (1) below is a contraction.
**Here is the Bellman update that is used for Q-Learning:**
$$Q_{t+1}(s, a) = Q_{t}(s, a) + \alpha \bigl(r(s, a, s') + \gamma \max_{a^*} Q_{t}(s', a^*) - Q_t(s,a)\bigr) \tag{1} \label{1}$$
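For reference (this restatement is mine, not part of the original claim), the contraction property is usually stated for the Bellman optimality operator that underlies update (\ref{1}):
$$(T Q)(s, a) = \mathbb{E}_{s'}\bigl[r(s, a, s') + \gamma \max_{a^*} Q(s', a^*)\bigr], \qquad \lVert T Q_1 - T Q_2 \rVert_{\infty} \leq \gamma \lVert Q_1 - Q_2 \rVert_{\infty}.$$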
The proof that (\ref{1}) is a contraction relies on several facts; the relevant one for this question is that the max operation is non-expansive, that is:
$$\lvert \max_a f(a)- \max_a g(a) \rvert \leq \max_a \lvert f(a) - g(a) \rvert \tag{2}\label{2}$$
This is also proved in many places and is fairly intuitive.
**Consider the following Bellman update:**
$$ Q_{t+1}(s, a) = Q_{t}(s, a) + \alpha \bigl(r(s, a, s') + \gamma \, SAMPLE_{a^*} \bigl(Q_{t}(s', a^*)\bigr) - Q_t(s,a)\bigr) \tag{3}\label{3}$$
where $SAMPLE_{a}(Q(s, a))$ samples an action in that state with probability given by the Q values of the available actions (i.e., actions are weighted by their Q values).
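To make this precise in expectation (the normalization below is my own assumption; the post only says that actions are weighted by their Q values), the sampled term can be replaced by its expectation under those weights:
$$\mathbb{E}_{a^* \sim w(\cdot \mid s')}\bigl[Q_{t}(s', a^*)\bigr], \qquad w(a^* \mid s') = \frac{Q_{t}(s', a^*)}{\sum_{b} Q_{t}(s', b)},$$
assuming the Q values are non-negative; otherwise a softmax weighting would be needed.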
**Is this new Bellman operation still a contraction?**
Is the SAMPLE operation non-expansive? Of course, individual samples can violate equation (\ref{2}); what I am asking is: **is it non-expansive in expectation?**
My approach is:
$$\bigl\lvert\, \mathbb{E}_{a \sim Q}[f(a)] - \mathbb{E}_{a \sim Q}[g(a)] \,\bigr\rvert \leq \mathbb{E}_{a \sim Q}\bigl\lvert\, f(a) - g(a) \,\bigr\rvert \tag{4} \label{4}$$
Equivalently:
$$\bigl\lvert\, \mathbb{E}_{a \sim Q}[f(a) - g(a)] \,\bigr\rvert \leq \mathbb{E}_{a \sim Q}\bigl\lvert\, f(a) - g(a) \,\bigr\rvert$$
(\ref{4}) is true since, by Jensen's inequality (equivalently, the triangle inequality for expectations):
$$\bigl\lvert\, \mathbb{E}[X] \,\bigr\rvert \leq \mathbb{E}\,\bigl\lvert\, X \,\bigr\rvert$$
**However, I am not sure whether proving (\ref{4}) proves the theorem. Do you think this is a legitimate proof that (\ref{3}) is a contraction?**
(If so, this would mean that Q-learning with a stochastic policy theoretically converges, so we could have stochastic policies with regular Q-learning; this is why I am interested.)
Both intuitive answers and mathematical proofs are welcome.
|
https://ai.stackexchange.com/questions/22642/is-the-bellman-equation-that-uses-sampling-weighted-by-the-q-values-instead-of
|
[
"reinforcement-learning",
"q-learning",
"proofs",
"convergence",
"bellman-equations"
] | 8 | 2020-07-23T10:32:14 |
[
"Your question in not very clear to me. Since $f(a)$ and $g(a)$ are not clear to me. The formulas are intuitive and individually correct, I am not sure whether arriving at those intuitive forms you have mentioned, so easy. Check this link for example: users.isr.ist.utl.pt/~mtjspaan/readingGroup/ProofQlearning.pdf As a side not I do not think proving convergence is so easy. There is a topic called Concentration Inequalities which have to be studied to prove convergence. I think you can use this to prove your theorems.",
"(1) is a bellman update; it is a copy paste error that rhs has t+1 (sorry about that) thanks for noticing; I fixed the error now."
] | 2 |
Science
| 0 |
400
|
economics
|
Dynamic Bertrand competition when players take turns
|
Consider the following game:
* There are two players, $i\in\\{1,2\\}$
* Time is discrete and runs to infinity, with periods $t\in\\{1,2,\ldots\\}$
* At each point in time, each player $i$ has a price $p_i(t)\in\mathbb{R}_+$
* Initialise the game with $p_1=p_2=p(0)$.
* In odd-numbered periods, player 1 can change his price to any $p_1\in\mathbb{R}_+$. Player 2 cannot change his price.
* In even-numbered periods, player 2 can change his price to any $p_2\in\mathbb{R}_+$. Player 1 cannot change his price.
* For each price, there is a demand $D(p)$, which is a decreasing function. The firm with the lowest price at the end of each period captures the whole demand, receiving payoff $D(p_i)p_i$ for that period. The firm with the highest price receives a payoff of zero for the period. If prices are equal then each firm gets a payoff of $pD(p)/2$.
* Players discount the future at a common rate $0\leq\delta\leq1$, so a payoff of $\pi$ that occurs $t$ periods in the future has present value $\pi\delta^t$ (a small illustration of this convention follows the list).
* Write $p^*$ for the monopoly price that maximises $D(p)p$.
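As a small illustration of the payoff convention (this example is mine, not part of the original setup): for $\delta<1$, a constant per-period payoff of $\pi$ received in every period from the current one onward has present value
$$\sum_{t=0}^{\infty}\delta^{t}\pi=\frac{\pi}{1-\delta},$$
so if both firms kept the same price $p$ forever, each would obtain $\frac{pD(p)}{2(1-\delta)}$.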
The question is: can we identify a complete characterization of the set of Nash equilibria of this game?
Note that a valid strategy is a complete contingent plan, which specifies the choice of action for any history of the game.
This question follows the discussion here: [What determines the outcome of a price war, and why isn't that outcome reached instantaneously?](https://economics.stackexchange.com/questions/8398/what-determines-the-outcome-of-a-price-war-and-why-isnt-that-outcome-reached-i?noredirect=1#comment10737_8398)
|
https://economics.stackexchange.com/questions/8473/dynamic-bertrand-competition-when-players-take-turns
|
[
"game-theory",
"oligopoly",
"repeated-games"
] | 7 | 2015-09-30T11:45:29 |
[
"Is the Maskin-Tirole ECTA 1988 (the second paper, oligopoly) what you are looking for?",
"@denesp You're right. The more I think about it, the more the premise of the question doesn't make much sense.",
"Yes but the grim strategies are just a subset of all possible strategies. And what you describe is actually just a subset of all grim strategies, the condition to go grim can be quite ridiculous: \"Play $p$ unless the other players last six prices were not $\\pi, \\pi, 2, 2, \\sqrt{2}, \\sqrt{2}$ in which case play 0.\" This strategy can be part of an equilibrium (given some restrictions on $\\delta$). So a characterization of the kind you describe would not be a characterization of all equilibria.",
"@densep By characterization I mean a description of every Nash equilibrium. As you note, there are infinitely many equilibria just considering grim trigger strategies, so it is impossible to individually enumerate them all. But those equilibria can be concisely 'characterised' as \"play $p$ unless someone ever played a $p'\\neq p$ in which case play 0\", which should work for $\\delta$ large enough.",
"The word characterization is problematic. Saying that the strategies you are looking for are the strategies that constitute a Nash-equilibrium would be a tautology, but it is a characterization nonetheless. Such a tautology can be made less trivial by giving an alternate definition of Nash-equilibrium. I am not sure this is what you are looking for but you could say something about the range of payoffs: It seems to me that literally any payoff vector between the competitive solution and the cooperative solution is attainable in equilibrium using grim strategies."
] | 5 |
Science
| 0 |