| qid | metadata | prompt | chosen | rejected | domain |
|---|---|---|---|---|---|
49,709
|
[
"https://mathoverflow.net/questions/49709",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10909/"
] |
I'm assuming someone must have scooped me on this simple argument. Where does it (first) appear in the literature?
Fix an ultrafilter $\mu$ on $\omega$, the natural numbers.
Alice and Bob play a nim-like game. At the start each player "holds" the empty set and the starting "position" consists of $\omega$. Beginning with Alice, each player in turn removes a non-empty finite initial segment from the current position (leaving some final segment of $\omega$) and deposits the removed segment into his or her holdings. Play proceeds for $\omega$ rounds.
Now the object of the game: to finish with holdings that belong to the ultrafilter $\mu$.
Strategy stealing obviates the possibility of either player possessing a winning strategy; the existence of $\mu$ thus contradicts AD. In more detail, if either player had a winning strategy, the game would admit infinitely many winning positions, and the other player could then move to a winning position on his or her very first move.
<hr>
Many sources work much harder than this to prove the weaker result that AC contradicts
AD. (Afterthought: I'd love to see a big-list question collecting theorems where unnecessarily complicated proofs permeate the literature despite the availability of simpler treatments...MO appropriate?)
|
Or just take all powers of $3$ and add to them all numbers that are congruent to $1$ modulo $3$.
|
This is false. We construct $A$ inductively, so that the following holds:
<ul>
<li>$A$ contains all powers of two greater than or equal to $4$ and no other even numbers.</li>
<li>The number of odd numbers in $A$ between $2^j$ and $2^{j+1}$ is $2^{j-2}$.</li>
<li>No power of two is in a 3-AP contained in $A$.</li>
</ul>
We start by specifying that $4\in A, 5\in A, 6\notin A,7\notin A$.
Suppose $A\cap\{1,\ldots, 2^m-1\}$ has been defined so that the above properties hold. We next define $A\cap\{ 2^m,\ldots, 2^{m+1}-1\}$ as follows: $2^m\in A$. There are $1+2+\ldots+2^{m-3}<2^{m-2}$ odd numbers smaller than $2^m$ in $A$; let $O_m$ be the set of all of them. We choose $2^{m-2}$ odd numbers in
$$
\{2^m,\ldots, 2^{m+1}\} \backslash (2^{m+1}-O_m)
$$
and add them to $A$. We can do this since $|O_m|< 2^{m-2}$.
The first two properties are clear from the construction. To check the last (the one we care about), note that $2^m$ can't be the first/last term of a $3$-AP in $A$, since then the last/first term would also be even, hence another power of $2$, and then the middle one would be even, and a power of $2$ as well. But $2^m$ can't be the middle term of a $3$-AP either: for the same reason as before, the other two terms must be odd. Let $(a,2^m,c)$ be the AP. Then $a\in O_m$ by definition, but this implies $c-2^m=2^m-a$, or $c\in 2^{m+1}-O_m$, a case which was excluded in the construction.
Clearly $A$ has density $1/4$ so this completes the proof.
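For readers who want to experiment, here is a minimal sketch (mine, not part of the original answer) that follows the recipe above for small $m$ and brute-force checks that no power of two lies in a 3-AP contained in $A$:
<pre><code>def build_A(M):
    A = {4, 5}                                   # 6 and 7 are excluded by fiat
    for m in range(3, M):
        A.add(2 ** m)
        O_m = [a for a in A if a % 2 == 1]       # odd members, all below 2^m
        forbidden = {2 ** (m + 1) - a for a in O_m}
        needed = 2 ** (m - 2)
        for a in range(2 ** m + 1, 2 ** (m + 1), 2):
            if needed == 0:
                break
            if a not in forbidden:               # avoid the set 2^{m+1} - O_m
                A.add(a)
                needed -= 1
        assert needed == 0                       # enough admissible odds exist
    return A

A = build_A(10)
S = sorted(A)
pow2 = {2 ** m for m in range(2, 10)}
for i, x in enumerate(S):
    for y in S[i + 1:]:
        z = 2 * y - x                            # (x, y, z) is a 3-AP
        if z in A and pow2 & {x, y, z}:
            raise AssertionError((x, y, z))
print("no power of two lies in a 3-AP of A")
</code></pre>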
<hr>
If $A$ has positive upper density, one can still ask what is the largest possible size of the set $B$ of all elements of $A$ which are not in any $3$-AP contained in $A$. Clearly $B$ has density $0$ by Roth's Theorem (and we get better bounds from the quantitative bounds in Roth's Theorem). Is it possible to do better?
|
https://mathoverflow.net
|
3,045,344
|
[
"https://math.stackexchange.com/questions/3045344",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/569640/"
] |
I have a book about group theory and there was the following question:
<blockquote>
Let <span class="math-container">$G$</span> be the set of all real matrices of the following form: <span class="math-container">$\begin{pmatrix}a & b\\
-b & a
\end{pmatrix}$</span> where <span class="math-container">$a^2+b^2>0$</span>.
<ol>
<li>Prove that <span class="math-container">$G$</span> is a group.</li>
<li>Prove that <span class="math-container">$G\cong (\mathbb C^\times,\cdot )$</span>.</li>
</ol>
</blockquote>
I successfully proved that <span class="math-container">$G$</span> is a group. Now I'm trying to solve the second sub-question. In the book they suggested defining the following function:
<span class="math-container">$$ f:\begin{pmatrix}a & b\\
-b & a
\end{pmatrix} \mapsto a+ib$$</span>
Also they wrote "obviously <span class="math-container">$f$</span> is a bijection", and then they proved the homomorphism equation. The only part that I didn't understand is why <span class="math-container">$f$</span> is a bijection, and why is it so obvious? How can I prove it formally?
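A formal proof simply exhibits a two-sided inverse. As a quick numeric illustration (not from the book; the helper names are mine), the candidate inverse <span class="math-container">$g(a+ib)=\begin{pmatrix}a & b\\ -b & a\end{pmatrix}$</span> undoes <span class="math-container">$f$</span> in both directions, which is exactly what bijectivity means:
<pre><code>import numpy as np

def f(M):                 # matrix to complex number, as in the book
    return complex(M[0, 0], M[0, 1])

def g(z):                 # complex number to matrix: the candidate inverse
    return np.array([[z.real, z.imag], [-z.imag, z.real]])

M = np.array([[3.0, -2.0], [2.0, 3.0]])
z = 1.5 + 0.5j
assert np.allclose(g(f(M)), M) and f(g(z)) == z   # both compositions are the identity
A, B = g(1 + 2j), g(3 - 1j)
assert np.isclose(f(A @ B), (1 + 2j) * (3 - 1j))  # homomorphism spot-check
</code></pre>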
|
The only potential problem with choosing <span class="math-container">$E$</span> so <span class="math-container">$0 < \mu(E) < \infty$</span> is that there are positive measures <span class="math-container">$\mu$</span> with sets <span class="math-container">$E$</span> such that <span class="math-container">$\mu(E) = \infty$</span> and no measurable subset of <span class="math-container">$E$</span> has nonzero finite measure. But that can't happen here, because <span class="math-container">$f \in L^p$</span> and <span class="math-container">$f \ne 0$</span> on <span class="math-container">$E$</span>. Consider sets of the form <span class="math-container">$\{x: f(x) > \epsilon\}$</span> (or <span class="math-container">$< - \epsilon$</span>).
|
Here's one problem: Having <span class="math-container">$f\ne 0$</span> on <span class="math-container">$E$</span> doesn't imply that
<span class="math-container">$$
\int_E f\, d\mu \ne 0.
$$</span>
For instance, the positive and negative part of <span class="math-container">$f$</span> could cancel each other on <span class="math-container">$E$</span>.
I suggest that you use the fact that for <span class="math-container">$X=L^p$</span> you have <span class="math-container">$X^*\simeq L^q$</span>.
|
https://math.stackexchange.com
|
594,479
|
[
"https://physics.stackexchange.com/questions/594479",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/257503/"
] |
For an Intro Thermal Physics course I am taking this year, I had a simple problem which threw me off guard; I would appreciate some input to see where I am lacking. The problem is as follows:
Does the entropy of the substance decrease on cooling? If so, does the <strong>total</strong>
entropy decrease in such a process? Explain.
<strong>Here is how i started this:</strong>
-> Firstly, for a body of mass <em>m</em> and specific heat <em>c</em> (assuming it is constant), the heat absorbed by the body for an infinitesimal temperature change is <span class="math-container">$dQ=mc\,dT$</span>.
-> Now if we raise the temperature of the body from <span class="math-container">$T_1$</span> to <span class="math-container">$T_2$</span>, the entropy change associated with this change in the system is <span class="math-container">$\int_{T_1}^{T_2}mc\frac{dT}{T}=mc\ln\frac{T_2}{T_1}$</span>. This means the entropy of my system has increased. Up to this point I was fine.
<strong>I face difficulty in the following:</strong>
1. <em>Is this process, the act of heating this solid, a reversible or an irreversible one?</em> Now, I know that entropy is a state variable, so even if the process is irreversible, to calculate the entropy change for the system we must find a reversible process connecting the same initial and final states and calculate the system entropy change along that path. We can do so if we imagine that we have at our disposal a heat reservoir of large heat capacity whose temperature <em>T</em> is under our control. We first adjust the reservoir temperature to <span class="math-container">$T_1$</span> and put the object in contact with the reservoir. We then slowly (reversibly) raise the reservoir temperature from <span class="math-container">$T_1$</span> to <span class="math-container">$T_2$</span>. The body gains entropy in this process, the amount I have calculated above.
According to the main problem, if I were to reverse this process and slowly lower the temperature of the body from <span class="math-container">$T_2$</span> to <span class="math-container">$T_1$</span>, wouldn't the opposite happen? I.e. the body loses entropy to the reservoir, the same amount as calculated above but with the opposite sign?
2. From the above discussion, can I say that the net entropy change of the system+surroundings is zero? Had it been a reversible process, then from the second law I know it would be zero; even if it is irreversible, as long as I connect the same two states with a reversible path, the net still comes out to be zero.
Am I right to think of it as such? I have had this problem of discerning what is reversible/irreversible for a while.
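For concreteness, here is a small numeric illustration of the formula above (the numbers are mine, not part of the original problem): cooling 1 kg of water reversibly from 350 K to 300 K.
<pre><code>import math

m, c = 1.0, 4186.0          # kg, J/(kg K): roughly liquid water
T1, T2 = 350.0, 300.0       # cooling, so T2 is below T1
dS_body = m * c * math.log(T2 / T1)    # negative: the body loses entropy
dS_reservoir = -dS_body                # reversible limit: exactly opposite
print(dS_body)                         # about -645 J/K
print(dS_body + dS_reservoir)          # 0: total entropy change vanishes
</code></pre>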
|
Simple answer: yes.
Think about taking two extreme cases:
How much does a slinky extend in gravity-free space? Not at all.
How much would it extend on, say, Jupiter, or even near a black hole? It should extend by a large amount.
Gravity does play a role.
|
If a slinky is hanging vertically in a gravitational field, the amount of stretch in any short section depends on the weight of the coil hanging below that section. Less gravity will produce less stretch.
|
https://physics.stackexchange.com
|
652,455
|
[
"https://electronics.stackexchange.com/questions/652455",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/248767/"
] |
I was thinking about diamonds, and how they're excellent thermal conductors and yet at the same time very good electrical insulators.
Does the opposite of diamond exist, i.e. are there commonly available, inert (as in, safe to use/handle) conductors with poor thermal conductivity?
|
<blockquote>
<em>Does infinite magnetic permeability (e.g. for an ideal transformer)
violate conservation of energy?</em>
</blockquote>
Infinite magnetic permeability produces infinite inductance (even for a single-turn coil), and the rate of change of current that can be produced through an infinite inductance by applying a <strong>finite</strong> voltage is zero.
Hence no current can flow, no energy can be put in, and there is no violation of the conservation of energy.
<blockquote>
<em>Creating infinite magnetic field lines from a finite applied field</em>
</blockquote>
It can't be done without applying infinite voltage for an infinite amount of time.
<sub>Magnetic permeability is a material constant (just like electrical resistivity); it doesn't imply that any source of current is applied that might create an H-field, just as electrical resistivity doesn't imply a flow of current due to an applied voltage.</sub>
|
The piece would "short out" the H field. The energy in a unit volume of a magnetic core is proportional to B times H, and -- of necessity -- the H field would be zero. So the energy stored in the core would be zero.
I'm not sure how this would work out with a solenoid core (i.e., an open core). For a toroidal core a simple coil would have infinite inductance (which is expected).
|
https://electronics.stackexchange.com
|
131,470
|
[
"https://security.stackexchange.com/questions/131470",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/119006/"
] |
I recently had a call from someone I thought was my broadband company. As the call went on, I realised that it was someone trying to get at my bank details. They asked for a card reader, claiming they wanted to refund me an amount of money, which was not true. To cut the story short, they convinced me that I needed to load some software onto my computer. Now I am worried that my child is not safe on her computer. Could you please send me some advice on what I should do?
|
I think you're basically on the right track. Your client needs to provide some
token which your server will recognise and accept as proof that the client is
legitimate. How to do this depends on a number of factors and on what risks you
are trying to protect against. There is no single solution.
Some of the things to consider include:
<ul>
<li>Complexity. This is possibly the biggest threat to both security and
reliability. The more complex the solution is, the more difficult it is to
prove correct. This can mean an increase in both security flaws and plain
ordinary bugs, and will result in higher maintenance costs. As the saying
goes, "everything should be as simple as possible, but no simpler".</li>
<li>Level of assurance. What is it you need to be assured about? Do you just
need to know the client will behave correctly, i.e. make requests which the
server can understand and respond to rather than just consume resources and
possibly degrade service? Or do you actually need to know it is a specific
approved client, or perhaps an approved user, or maybe even coming from an
approved source (IP address)? If you need high levels of assurance, what do
you need to do to prevent clients from lying - trying to trick your server by
providing fake credentials - and what do you need to do to prevent theft of
credentials (from sniffing network packets, MitM (man-in-the-middle) attacks,
etc.)?</li>
<li>The robustness of your server. Your server should be liberal in what it
will accept and conservative in what it sends. This basically means your
server should handle input in such a way that it will recover from bad input
(incorrect format, corrupted data, etc.) and not simply crash or exhibit
unexpected behaviour (such as dumping out sensitive data to the client) when
provided with unexpected input, and will be conservative in the responses it
sends (i.e. predictable and consistent).</li>
<li>The inherent value. What is the inherent value to you, your client and
others? This will help determine what level of protection you need to
consider. The likelihood of attack depends to some degree on the value (or
perceived value) of the service or data to an attacker. This can be difficult
to assess as you cannot always identify the motives an attacker may have. In
some cases it is easy, such as when assets with a monetary value are
involved. Other times, less so, such as when it involves 'bragging rights',
revenge or personal grudges, or possibly some misguided belief or
understanding.</li>
<li>Understand the architecture and underlying technology. It is important to
have a good basic understanding of TCP/IP in order to identify the most
appropriate controls - for example, the difference between the basic
protocols (UDP vs. TCP) and the basics of how a connection is made. This
determines which controls are most appropriate: at what point during the
connection you can make a decision, whether a certain decision is even
meaningful, and what level of trust you can put in the information you're
being given. For example, it is easy to spoof the source address of a UDP
packet because there is no two-way connection handshake, but harder to do so
with a TCP socket because there is.</li>
<li>Avoid 'rolling your own'. Don't try to invent your own security solution.
Use an established and tried technique. In this respect, your question
indicates you're heading in the right direction.</li>
</ul>
Consider some basic use cases, such as a standard web site (essentially a
server with clients who connect to a specific port) compared to a pay-per-use
service, such as Audible. In the first case, most of the time, the server is
less interested in the specific individual or client. What the server wants to
know is that the client understands the protocol and sends commands the server
understands. In the second case, the server needs to know the client is
legitimate, the user is a known paid-up subscriber and possibly that the client
is an approved client (in the case of Audible, the approved clients are able to
verify encryption keys so that the client can decrypt the Audible book, which
has DRM protection). In these two cases, the requirements are very different
and will be addressed in different ways. In the first case, you really just
need to know the client understands your protocol and will likely behave in an
acceptable manner. In the second case, you need to protect against forgeries
and will likely need more complex functionality, such as encrypted
communications, shared encryption keys, login credentials etc.
In your case, where you're experimenting and learning, you want to start off
simple. Defining a simple protocol which requires the client to send a known
'fact', such as a string or key, is probably fine - a minimal sketch of this
idea follows below. However, it also depends on the environment you're working
in. If you're just experimenting on systems within a LAN which has reasonable
firewalls between you and the Internet, then you probably only need to worry
about systems on your LAN. You should ensure you're using non-privileged ports
(i.e. above 1024), avoid using ports which already have a well-defined or
common use, and don't leave your server operational when not actively using
it. If on the other hand you're operating in a more accessible environment,
such as a cloud-based platform, then you may need to be a little more
defensive - log what is connecting, perhaps limit connections to the IP
addresses of your test clients etc. Note that the advice not to roll your own
security solution can be ignored when you are just experimenting or learning.
In some cases, trying to first solve the problem yourself is not a bad way of
learning and can help in later understanding of why established solutions are
designed the way they are - just don't do it for a real application.
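To make the 'known fact' idea concrete, here is a deliberately minimal sketch (the token, host and port are placeholders of mine; this is a learning aid for a LAN, not a substitute for TLS or real authentication):
<pre><code>import socket

TOKEN = b"correct-horse-battery-staple"      # hypothetical shared secret

def serve(host="127.0.0.1", port=5050):      # non-privileged port, above 1024
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            print("connection from", addr)   # log what is connecting
            if conn.recv(64).strip() == TOKEN:
                conn.sendall(b"OK\n")        # predictable, consistent replies
            else:
                conn.sendall(b"DENIED\n")

if __name__ == "__main__":
    serve()
</code></pre>
A matching client just connects and sends the token; anything else gets a uniform "DENIED", so the server stays conservative in what it sends.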
There are many books which describe how to secure server applications. Key
topics to cover would be things like network and application firewalls, TCP/IP,
and common encryption and hashing techniques and the applications/protocols
which use them (HTTPS, SSL/TLS, SSH, PGP/GPG). Have a look at existing
protocols. The HTTP/HTTPS protocol is a good starting point as it is relatively
simple. Don't get overwhelmed by the complexities, especially with respect to
things like encryption and hashing. These are very complex topics, but you
really just need to understand the principles and how to apply them rather than
the very technical detail. This is the main reason you should use a known and
tested approach rather than try to invent your own.
You may also find some of the following useful
<pre><code>- https://www.owasp.org
- https://www.feistyduck.com/books/bulletproof-ssl-and-tls/bulletproof-ssl-and-tls-introduction.pdf
- https://letsencrypt.org/
</code></pre>
The OWASP site is particularly useful as it has lots of great information on
common security flaws/mistakes developers make and how to both detect and
prevent them. Much of this information can be generalised to any server/client
environment.
|
Most services do not validate the application being used as it is extremely hard to get a relatively reliable answer. Therefore what they can validate is the user through authentication and possibly their location through the IP address.
That said, some servers do validate the application, but because there is <strong>always</strong> a way to reverse engineer and bypass the verification, they have to constantly update it. An example of those would be certain poker applications that use multiple levels of obfuscation and regularly (almost daily) do updates that change the security codes and their location in the program to make it very hard to find before the next update.
Because of the effort needed and the remaining impossibility to be 100% sure, I advise you against doing that.
In any case, you should do validations on the server based on the data that is received to determine if it appears legitimate or not, <strong>never</strong> relying on what seems to be happening on the user's machine.
|
https://security.stackexchange.com
|
509,039
|
[
"https://physics.stackexchange.com/questions/509039",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/134777/"
] |
Coulomb's law states that if we have two charges <span class="math-container">$q_{1}$</span> and <span class="math-container">$q_{2}$</span>, then <span class="math-container">$q_{1}$</span> will act on <span class="math-container">$q_{2}$</span> with a force <span class="math-container">$$ \textbf{f}_{12}=\frac{q_{1}q_{2}}{r_{12}^2} { \hat {\textbf {r}}_{12}},$$</span>
and <span class="math-container">$q_{2}$</span> will similarly act on <span class="math-container">$q_{1}$</span> with a force <span class="math-container">$\textbf{f}_{21}$</span> such that
<span class="math-container">$$\,\textbf{f}_{21}=-\textbf{f}_{12}.$$</span>
Suppose the only things we knew were that the repulsive forces vary like <span class="math-container">$r^{-2}$</span>, and that they depend on the magnitude of the charges involved. Can we infer from these two observations alone that <span class="math-container">$\textbf{f}_{21}=-\textbf{f}_{12}$</span>? Or would we need further experiments to establish this equation?
The collinearity can be deduced from symmetrical considerations. What about the magnitude?
|
<blockquote>
Suppose the only things we knew were that the repulsive forces vary like <span class="math-container">$r^{-2}$</span>, and that they depend on the magnitude of the charges involved. Can we infer from these two observations alone that <span class="math-container">$\textbf{f}_{21}=-\textbf{f}_{12}$</span>?
</blockquote>
No. In fact, it is not true in general, for a system of two charged particles, that the force acting on charge 1 and the force acting on charge 2 obey Newton's third law at a particular time, given a particular frame of reference. In general there is radiation, Coulomb's law is false, and momentum is exchanged between the charges and the radiation.
|
It is worth repeating that <strong>laws</strong> in physics are <strong>axioms</strong>: there is no proof or derivation; a law is posited so that a physical-mathematical theory can choose those solutions that <strong>fit existing data</strong> and, importantly, will be <strong>predictive</strong> in new situations. Laws, in effect, are a distillate of data.
Coulomb's law defines one of the possible forces, so that Newton's laws can be used in order to obtain classical-mechanics solutions and predictability in kinematic problems involving charges.
|
https://physics.stackexchange.com
|
736,210
|
[
"https://physics.stackexchange.com/questions/736210",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/346417/"
] |
My question is in regards to the variational principle in approximating the wavefunction of Helium.
<strong>Some Background:</strong>
<span class="math-container">$$\hat{H}=-\frac{\hbar^2}{2m_{e}}\nabla_{1}^{2}-\frac{\hbar^2}{2m_{e}}\nabla_{2}^{2}-\frac{Ze^2}{4\pi\epsilon_{0}r_{1}}-\frac{Ze^2}{4\pi\epsilon_{0}r_{2}}+\frac{e^2}{4\pi\epsilon_{0}|r_{1}-r_{2}|}$$</span>
<span class="math-container">$$\hat{H}\Psi(\vec{r_{1}},\vec{r_{2}}) =E\Psi(\vec{r_{1}},\vec{r_{2}})$$</span>
For many-electron atoms such as helium, the (nonrelativistic) electronic TISE cannot be solved analytically as a result of the repulsion term in the electronic Hamiltonian. The first approximation to the wavefunction of helium is made by completely neglecting the repulsive term. Neglecting the repulsive term in the electronic Hamiltonian is called the orbital approximation, and it allows the Hamiltonian to separate into a sum of single-electron Hamiltonians and the total wavefunction to take the form of a product of single-electron wavefunctions.
<span class="math-container">$$\hat{H}=-\frac{\hbar^2}{2m_{e}}\nabla_{1}^{2}-\frac{\hbar^2}{2m_{e}}\nabla_{2}^{2}-\frac{Ze^2}{4\pi\epsilon_{0}r_{1}}-\frac{Ze^2}{4\pi\epsilon_{0}r_{2}}=\hat{H}_{1}+\hat{H}_{2}$$</span>
<span class="math-container">$$\Psi(\vec{r_{1}},\vec{r_{2}})=\psi(\vec{r_{1}})\psi(\vec{r_{2}})$$</span>
<span class="math-container">$$\psi(\vec{r_{1}})=\frac{1}{\sqrt{\pi}}\left(\frac{Z}{a_{0}}\right)^{\frac{3}{2}}e^{-\frac{Z}{a_{0}}r_{1}}$$</span><span class="math-container">$$\psi(\vec{r_{2}})=\frac{1}{\sqrt{\pi}}\left(\frac{Z}{a_{0}}\right)^{\frac{3}{2}}e^{-\frac{Z}{a_{0}}r_{2}}$$</span>
This approximation is poor; one way of improving it is to introduce an effective nuclear charge <em>ζ</em> in place of the nuclear charge <em>Z</em>. The effective nuclear charge is a parameter that can be optimized using the variational principle to produce the lowest energy.
<span class="math-container">$$E=\frac{\int\phi^*\hat{H}\phi d\tau }{\int\phi^*\phi d\tau}$$</span>
<strong>So here is my question:</strong>
When the variational principle is applied, the trial wavefunction <em>ϕ</em> is the product of single-electron wavefunctions derived from the orbital approximation with <em>ζ</em> as a parameter; however, the Hamiltonian that is used is the full electronic Hamiltonian (including the repulsive term). Why is the full Hamiltonian used in the variational principle instead of the approximate Hamiltonian?
|
<span class="math-container">$\newcommand{\bra}[1]{\langle #1 \rvert}$</span>
<span class="math-container">$\newcommand{\ket}[1]{\lvert #1 \rangle}$</span>
<span class="math-container">$\newcommand{\amat}[4]{\left(\begin{matrix}#1 & #2 \\ #3 & #4 \end{matrix}\right)}$</span>
<blockquote>
Why is the full Hamiltonian used in the variational principle instead of the approximate Hamiltonian?
</blockquote>
Because we seek an approximate solution to the full Hamiltonian. Any approximate solution to the ground state will have a higher energy, so we can hope to effectively and practically use the variation principle to find a state that is "close" to the ground state by minimizing the expectation value of the Hamiltonian with respect to the parameters of the trial solution.
One typical formulation of the variation principle takes a parametrized function <span class="math-container">$\psi$</span> (subject to <span class="math-container">$\langle\psi\vert\psi\rangle=1$</span>) and constructs:
<span class="math-container">$$
\bra{\psi}\hat H\ket{\psi}\tag{1}
$$</span>
The quantity in Eq. (1) is then varied. If the variation is completely arbitrary, the variation recovers the TISE and its solution. If not, the function is not an exact solution and the variational estimate in Eq. (1) is always an upper bound for the lowest energy eigenvalue. Thus the variation principle may be especially useful for determining the ground state. (Upper bounds on the higher energy states can also be determined by using trial wavefunctions that are orthogonal to the exact eigenfunction of lower states). (See Bethe and Jackiw "Intermediate Quantum Mechanics" at pages 8-10, and 48.)
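As a concrete illustration of the minimization (a sketch of mine, not from the answer: it assumes the standard closed-form expectation value <span class="math-container">$E(\zeta)=\zeta^2-2Z\zeta+\tfrac{5}{8}\zeta$</span> in hartrees for this trial function), one recovers the familiar results <span class="math-container">$\zeta=Z-5/16$</span> and <span class="math-container">$E\approx-2.85$</span> hartree for helium:
<pre><code>import numpy as np

Z = 2                                  # helium

def E(zeta):                           # closed-form expectation value, hartrees
    return zeta**2 - 2 * Z * zeta + (5 / 8) * zeta

zetas = np.linspace(1.0, 2.0, 100001)  # scan the variational parameter
i = np.argmin(E(zetas))
print(zetas[i], E(zetas[i]))           # 1.6875 (= 27/16), about -2.8477
</code></pre>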
|
This is because the variational approach seeks an optimal wavefunction for a <em>fixed</em> Hamiltonian.
What is unusual about the helium atom is that the guess function is a solution to the problem <em>with interactions</em>, but its form is very closely related to the form of the exact solution <em>without interaction</em>. The interactions here are <em>fixed</em>; only the function is parametrized.
|
https://physics.stackexchange.com
|
1,562,294
|
[
"https://math.stackexchange.com/questions/1562294",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/244966/"
] |
(Quant Job Interview Questions and Answers, Q3.22)
<blockquote>
Suppose we have an ant travelling on the edges of a cube, going from one vertex to another. The ant never stops, and it takes it one minute to go along one edge. At every vertex the ant randomly picks one of the three available edges and starts going along that edge. We pick a vertex of the cube and put the ant there. What is the expected number of minutes that it will take to return to that vertex?
</blockquote>
Reading up on this, it seems I need to learn about Markov chains to answer this properly; however, for now here is my incorrect attempt. Please could you tell me why it is incorrect?
<blockquote>
Assuming 'randomly' means uniformly with $p=\frac{1}{3}$, and observing that returning to the vertex must take an even number of steps, then by brute force:
$$ \mathbb E (T) = \sum_{n=1}^{\infty} 2n \cdot \left(\frac{1}{3}\right)^{2n} = 2 \sum_{n=1}^{\infty} n \left(\frac{1}{9}\right)^{n} = 2\,\frac{\frac{1}{9}}{\left(1-\frac{1}{9}\right)^2} = \frac{9}{32} $$
</blockquote>
|
Look at the distance from the origin after an even number $2k$ of minutes.
This distance can only be $0$ or $2$. In two minutes the ant can do the following:
$0 \rightarrow 1 \rightarrow 0$ with probability $1\cdot1/3$
$0 \rightarrow 1 \rightarrow 2$ with probability $1\cdot2/3$
$2 \rightarrow 1 \rightarrow 0$ with probability $2/3\cdot1/3$
$2 \rightarrow 3 \rightarrow 2$ with probability $1/3\cdot1$
$2 \rightarrow 1 \rightarrow 2$ with probability $2/3\cdot2/3$
The probability of going from distance 2 to distance 2 in 2 minutes without passing the start vertex is then $7/9$.
The probability of returning after $2k$ minutes is then
$p_{2}=1/3 $
$p_{2k}=\frac{2}{3}\cdot\left(\frac{7}{9}\right)^{k-2}\cdot\frac{2}{9}$ for $k\geq2$; we can check that these actually sum to $1$.
The expectation time to return to the origin is then
$$ E(T) = \frac{2}{3} +\sum_{k=2}^{\infty}\frac{8k}{27}\left( \frac{7}{9}\right)^{k-2} = 8 $$
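As a quick cross-check of these numbers (a sketch of mine, not part of the answer), one can sum the series numerically:
<pre><code>p_return = {1: 1 / 3}                       # return after 2 minutes (k = 1)
for k in range(2, 2000):                    # truncate the series far out
    p_return[k] = (2 / 3) * (7 / 9) ** (k - 2) * (2 / 9)

print(sum(p_return.values()))               # about 1.0: a valid distribution
print(sum(2 * k * p for k, p in p_return.items()))   # about 8.0 minutes
</code></pre>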
|
To model this as a Markov chain, let $$S=\{(0,0,0),(0,1,0),(0,0,1),(0,1,1),(1,0,0),(1,1,0),(1,0,1),(1,1,1)\}$$ and $P$ an $8\times8$ matrix with $P_{ij}=\frac13$ if $i$ and $j$ differ in exactly one digit, $0$ otherwise. Let $\{X_n:n\geqslant 0\}$ satisfy $$\mathbb P(X_{n+1}=j\mid X_0=i_1, \ldots, X_{n-1}=i_{n-1},X_n=i)=\mathbb P(X_{n+1}=j\mid X_n=i)=P_{ij}. $$
Then $\{X_n:n\geqslant 0\}$ is a Markov chain, and by symmetry, both the rows and columns of $P$ sum to $1$. Since $S$ is finite, $X$ has the unique stationary distribution $\pi$ being the uniform distribution over $S$ (i.e. $\pi_i = \frac18$ for each $i\in S$). Let
$$\tau_{ij} = \inf\{n>0:X_n=j\mid X_0=i\} $$
for each $i,j\in S$. It is known that $$\mathbb E[\tau_{ii}] = \frac1{\pi_i}, $$
so in this case the expected number of minutes to return to a vertex would be $8$.
|
https://math.stackexchange.com
|
559,643
|
[
"https://electronics.stackexchange.com/questions/559643",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/248108/"
] |
As an electronics hobbyist, I've already built a thing or two so this didn't seem like a complicated thing to do, but I was terribly mistaken. I wanted to build an FM modulated radio transceiver controlled by an Arduino board that would work anywhere between 86 and 520 MHz so that it'd include normal FM radio, VHF and UHF amateur bands and PMR and CB channels.
I expected there to be a miracle IC that would just require an audio and carrier wave input, rf amp and antenna, or that there would be plenty of similar projects already done that I could bounce off of, but hours of research gave me no answers, just more questions.
I came here to ask why radios are always built for specific bands like 136-148/200-260/400-430 MHz instead of working continuously - is there a legislative or physical limitation? And my second question is whether there is a way to approach this problem that would be friendly to someone who usually works with digital stuff (like an IC or module) instead of analog/radio electronics.
Thanks.
EDIT: Thank you all for your time, you were very helpful.
|
<blockquote>
I expected there to be a miracle IC that would just require an audio and carrier wave input
</blockquote>
Ah, but FM actively modulates a carrier; you can of course use the external oscillation input as carrier for a superhet design, but then you'll still need to generate the FM-modulated IF <em>and</em> suppress the leakage of the oscillator at the output (without suppressing the frequency-varying intended carrier). That's a rather complex thing to do in a single IC.
<blockquote>
I came here to ask why are the radios built always in specific bands like 136-148/200-260/400-430 MHz instead of working continuously - is there a legislative or physical limitation?
</blockquote>
Yes :D
both, mostly!
Also: If you only offer specific bands as device manufacturer, you don't have to guarantee performance in between. So, since it's not a big market you'd target with anything that's not commercially legal to do (the couple thousand ham rigs you could sell at most ... pffft).
When you do a superhet FM transmitter, you produce RF at <span class="math-container">\$f_{\text{LO}} \pm f_{\text{IF}}\$</span> (and of course other harmonics/intermodulation products), where your message signal is actually a frequency-changing <span class="math-container">\$f_{\text{IF}}\$</span>, but you only want the sum, not the difference (or vice versa); to isolate the sum frequency for a clean signal, you will need to filter out everything below <span class="math-container">\$f_{\text{LO}}+f_{\text{IF}}\$</span>. That only works with a fixed filter bank if you don't pick from more than an octave of <span class="math-container">\$f_{\text{LO}}\$</span>.
<blockquote>
And my second question is whether is there a way to approach this problem that would be friendly for someone who usually works with digital stuff (like an IC or module) instead of analog/radio electronics.
</blockquote>
Sure; you could generate an IF signal with e.g. a microcontroller (FM modulation of a carrier between say 100 Hz and 75.1 kHz is not that mathematically hard to do); then, mix that up with about any LO (you can buy digitally controllable oscillators, Silabs has such) using about any mixer (SA612 is certainly a classic). Then, you get all the intermodulation products, and your filtering needs to select the one you actually want to transmit.
A <strong>very</strong> digital way to do that is the rpitx software, which uses the PWM units on a Raspberry Pi SoC as a generator for an RF signal; you'll have to add a solid amount of filtering to get rid of the harmonics you <em>don't</em> want (you only want exactly one of them).
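To illustrate the digital-IF idea numerically (a sketch with illustrative parameters of my choosing, not a ready-to-transmit design), frequency modulation is just integrating the message into the phase of a carrier:
<pre><code>import numpy as np

fs = 192_000                        # sample rate, Hz
f_if = 10_700                       # IF carrier for this sketch, Hz
k_f = 2_500                         # deviation per unit message amplitude, Hz
t = np.arange(0, 0.1, 1 / fs)
msg = np.sin(2 * np.pi * 440 * t)   # 440 Hz test tone as the message

# integrate the instantaneous frequency to get the phase
phase = 2 * np.pi * np.cumsum(f_if + k_f * msg) / fs
if_signal = np.cos(phase)           # FM-modulated IF, ready to mix with an LO
</code></pre>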
|
Most of the answers are treating the transmit side. Fundamentally, however, the problem is that building such wideband transceivers - FM, AM or whatever - is just very difficult.
I suspect that you are not going to find any transceivers that cover a continuous range up to 500 MHz. If you did, they would probably include FM along with everything else, because once you have the basic circuitry in place for a transceiver, it is easy to add different modes.
On the legislative side, there is a sharp difference between broadcast FM and FM for voice communications. Basically, FM designed for voice is narrowband: it does not occupy a much greater bandwidth than a similar AM signal. It is a little better from a noise perspective than AM, and it is quite spectrally efficient.
Broadcast FM, on the other hand, occupies a much greater bandwidth because it uses a much higher deviation ratio. This provides much better noise suppression when you have a good strong signal. However, a broadcast FM signal is somewhere around 100 or 120 kilohertz wide (I cannot remember exactly).
|
https://electronics.stackexchange.com
|
9,302
|
[
"https://physics.stackexchange.com/questions/9302",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1927/"
] |
I think I saw in a video that if dark matter wasn't repulsive to dark matter, it would have formed dense massive objects or even black holes which we should have detected.
So, could dark matter be repulsive to dark matter? If so, what are the reasons? Could it be like the opposite pole of gravity that attracts ordinary matter and repulses dark matter?
|
Lubos Motl's answer is exactly right. Dark matter has "ordinary" gravitational properties: it attracts other matter, and it attracts itself (i.e., each dark matter particle attracts each other one, as you'd expect).
But it's true that dark matter doesn't seem to have collapsed into very dense structures -- that is, things like stars and planets. Dark matter does cluster, collapsing gravitationally into clumps, but those clumps are much larger and more diffuse than the clumps of ordinary matter we're so familiar with. Why not?
The answer seems to be that dark matter has few ways to dissipate energy. Imagine that you have a diffuse cloud of stuff that starts to collapse under its own weight. If there's no way for it to dissipate its energy, it can't form a stable, dense structure. All the particles will fall in towards the center, but then they'll have so much kinetic energy that they'll pop right back out again. In order to collapse to a dense structure, things need the ability to "cool."
Ordinary atomic matter has various ways of dissipating energy and cooling, such as emitting radiation, which allow it to collapse and not rebound. As far as we can tell, dark matter is weakly interacting: it doesn't emit or absorb radiation, and collisions between dark matter particles are rare. Since it's hard for it to cool, it doesn't form these structures.
|
Dark matter surely has to carry a positive mass, and by the equivalence principle, all positive masses have to exert attractive gravity on other masses.
Also, from the viewpoint of phenomenological cosmology, we obviously want dark matter to attract itself. It has to attract visible matter because this is why dark matter was introduced in the first place: it helps to keep the stars in a galaxy even though they're orbiting more quickly than one would expect from the distribution of visible mass in the galaxy.
For this reason, the force between dark matter and ordinary matter is surely attractive. The force between dark matter and dark matter has to be attractive, too. In fact, dark matter has played the dominant role in structure formation - the creation of the initial non-uniformities that ultimately became galaxies, clusters of galaxies, and so on. The dark matter halos are larger than the visible parts of the galaxies: the visible stars arose as "cherries on the pie" near the centers of the dark matter halos.
There's no doubt that the gravitational force between any pair of particle-like entities is attractive. This is linked to the positive mass i.e. positive energy - which is needed for stability of the vacuum (if there existed negative-energy states, the vacuum would decay into them spontaneously which would be catastrophic and is not happening) - and the basic properties of general relativity. In particular, there's a lot of confusion among the laymen whether antimatter has an attractive gravity. Yes, of course, the matter-antimatter and antimatter-antimatter gravitational forces are known to be attractive, too.
The non-gravitational forces between dark matter particles are almost certainly short-range forces. In particular, dark matter doesn't interact with electromagnetism, the only long-range non-gravitational force (mediated by a massless photon) we know - that's why it's dark (it emits no light).
The only repulsive force that arises in similar cosmological discussions is one due to dark energy - or the cosmological constant, to be more specific. Dark energy is something very different than dark matter. This force makes the expansion of the Universe accelerate and it is due to the negative pressure of dark energy which may be argued to cause this "repulsive gravity". However, dark energy is not composed of any particles. It's just a number uniformly attached to every volume of space.
|
https://physics.stackexchange.com
|
138,582
|
[
"https://cs.stackexchange.com/questions/138582",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/134947/"
] |
I was looking to solve this reduction, but I don't see how to construct the new graph. It seems very simple, but I'm not capable of doing it.
Here is the complete explanation of the reduction.
We consider a variant of the independent set problem which we shall call Independent Set with a Fixed Node, in which the input additionally contains a vertex <span class="math-container">$u$</span> and it is required that the independent set contain <span class="math-container">$u$</span>.
|
As I understand it, your problem is a decision problem defined as follows:
Independent set with fixed vertex (ISFV):
<ul>
<li>Input: a graph <span class="math-container">$G = (V, E)$</span>, a vertex <span class="math-container">$u \in V$</span>, an integer <span class="math-container">$k$</span>.</li>
<li>Question: is there an independent set of size <span class="math-container">$k$</span> containing <span class="math-container">$u$</span>?</li>
</ul>
Independent set (IS) is defined as:
<ul>
<li>Input: a graph <span class="math-container">$G = (V, E)$</span>, an integer <span class="math-container">$k$</span>.</li>
<li>Question: is there an independent set of size <span class="math-container">$k$</span>?</li>
</ul>
Suppose you can solve ISFV. Then you can solve IS by running ISFV for each <span class="math-container">$u\in V$</span> and checking if the answer is yes for any <span class="math-container">$u\in V$</span>. Since there are a polynomial number of vertices, the reduction is indeed polynomial.
Another way to do it is to construct the graph <span class="math-container">$G' = (V\cup\{u\}, E)$</span> (adding a vertex with no other edge), and check ISFV with <span class="math-container">$G'$</span>, <span class="math-container">$u$</span> and <span class="math-container">$k + 1$</span>, since the vertex <span class="math-container">$u$</span> can always be added to an independent set.
|
Suppose that we are given a graph <span class="math-container">$G$</span> and want to know whether it has an independent set of size <span class="math-container">$k$</span> containing <span class="math-container">$u$</span>. Such an independent set cannot contain any neighbor of <span class="math-container">$u$</span>, and so it is not hard to check that <span class="math-container">$G$</span> contains such an independent set iff the graph obtained by removing <span class="math-container">$u$</span> and all of its neighbors contains an independent set of size <span class="math-container">$k-1$</span>.
We can also reduce in the other direction by adding a dummy vertex which is disconnected from the rest of the graph.
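A small sketch of both directions (mine, using brute force as the stand-in oracle so the reductions can actually be run on toy graphs):
<pre><code>from itertools import combinations

def has_is(V, E, k):                      # plain Independent Set, brute force
    return any(all((a, b) not in E and (b, a) not in E
                   for a, b in combinations(S, 2))
               for S in combinations(V, k))

def isfv_via_is(V, E, u, k):              # ISFV to IS: delete N[u], ask k - 1
    banned = {u} | {w for v, w in E if v == u} | {v for v, w in E if w == u}
    V2 = [v for v in V if v not in banned]
    E2 = {(v, w) for v, w in E if v in V2 and w in V2}
    return has_is(V2, E2, k - 1)

# Path graph 1-2-3-4: {1, 3} is an independent set containing 1.
V, E = [1, 2, 3, 4], {(1, 2), (2, 3), (3, 4)}
print(isfv_via_is(V, E, u=1, k=2))        # True
print(isfv_via_is(V, E, u=2, k=2))        # True, via {2, 4}
</code></pre>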
|
https://cs.stackexchange.com
|
337,385
|
[
"https://math.stackexchange.com/questions/337385",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/67928/"
] |
Suppose we have matrices $A, B, C$ of dimensions $m \times n, m\times n, n \times l$ respectively. How can we prove $(A+B)\circ C = A\circ C + B \circ C$ (using the summation notation method?)
|
Tedious way... coefficient-wise:
$$
((A+B)C)_{i,j}=\sum_{k=1}^n(A+B)_{i,k}C_{k,j}=\sum_{k=1}^n(A_{i,k}+B_{i,k})C_{k,j}
$$
$$
=\sum_{k=1}^n\left(A_{i,k}C_{k,j}+B_{i,k}C_{k,j}\right)=\sum_{k=1}^nA_{i,k}C_{k,j}+\sum_{k=1}^nB_{i,k}C_{k,j}
$$
$$
=(AC)_{i,j}+(BC)_{i,j}=(AC+BC)_{i,j}
$$
for all $1\leq i\leq m$ and all $1\leq j\leq l$. So $(A+B)C=AC+BC$.
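A quick numeric sanity check of the identity (not a proof; random matrices with the stated shapes):
<pre><code>import numpy as np

m, n, l = 4, 5, 3
A, B = np.random.rand(m, n), np.random.rand(m, n)
C = np.random.rand(n, l)
assert np.allclose((A + B) @ C, A @ C + B @ C)   # distributivity holds
</code></pre>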
|
<strong>Hint:</strong> Look at the linear maps represented by $A$, $B$ and $C$.
|
https://math.stackexchange.com
|
330,602
|
[
"https://softwareengineering.stackexchange.com/questions/330602",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/82182/"
] |
The QA team should ideally do their testing in an environment that almost exactly matches the prod env (to minimize uncaught bugs that arise due to settings differences).
If that's true, does the QA team typically do the testing again on prod post-deployment at the Googles and Amazons out there? If the answer is yes, that sounds redundant... but also, how can they not test production?
|
Depends what you're doing, but often you can't test in production because the system is now attached to real resources. In the case of Amazon, would you run a real order with a real credit card and wait for the book to arrive? You often need to be careful about putting test data into a production system.
Once it's gone live, you're dependent on your monitoring systems to give you early warnings of bugs. Sudden drop in completed orders? Better roll back the update then go digging in the logs to find out what happened.
(Arguably what SpaceX have been doing is "testing in production" with their rocket recovery system: it's not really cost feasible to do dummy launches, so they launch with the real payload and then see if the system can manage to land the rocket.)
|
You don't test in production because you won't find anything you don't find in a proper test environment. Maybe some kind of quick check if deployment went ok, but this is not a big part of testing.
First you test your code (unit test).
Then you test your code in the whole application (integration test).
Then you test your code in the whole system landscape (system test).
All further testing is done by people who don't gain anything from finding or reporting bugs, which - when you think about it - sucks for the quality of the product.
|
https://softwareengineering.stackexchange.com
|
964,509
|
[
"https://math.stackexchange.com/questions/964509",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/181674/"
] |
In isosceles trapezoid $ABCD$, $AB=6$, $BC=9$, $CD=8$, and $AD=9$. Find the (perpendicular) distance from point $D$ to $BC$.
|
Here are some questions you should be able to answer, which will help you to find the answer you're looking for.
<ol>
<li>What is the distance from $AB$ to $CD$? (<em>Hint</em>: Use Pythagorean Theorem.)</li>
<li>What is the area of triangle $BCD,$ given the answer to the first question?</li>
<li>If we consider $BC$ as the "base" of the triangle $BCD,$ what is the answer to your question, given the answer to the second question? (<em>Hint</em>: Try rotating your picture. What is the height of the triangle $BCD$ if $BC$ is the base?)</li>
</ol>
If you have trouble answering any of these questions, let me know, but give them your best shot.
|
A picture can help a lot:
$\hspace{3cm}$<img src="https://i.stack.imgur.com/00Dtl.png" alt="enter image description here">
<strong>Hint:</strong> Find a couple of similar right triangles (one of which you know a couple of sides).
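For anyone wanting to check their work numerically, here is a coordinate computation (the placement below is one choice of mine consistent with the given side lengths, not part of either answer):
<pre><code>import numpy as np

h = np.sqrt(9**2 - 1**2)            # height: legs 9, horizontal offset (8-6)/2
D, C, B = np.array([0.0, 0.0]), np.array([8.0, 0.0]), np.array([7.0, h])
v1, v2 = C - B, D - B
area_BCD = 0.5 * abs(v1[0] * v2[1] - v1[1] * v2[0])   # triangle BCD
dist = 2 * area_BCD / 9             # take BC = 9 as the base
print(dist, 32 * np.sqrt(5) / 9)    # both are about 7.9505
</code></pre>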
|
https://math.stackexchange.com
|
71,509
|
[
"https://physics.stackexchange.com/questions/71509",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/27232/"
] |
The classic example of an indeterministic system is a radioactive isotope, e.g. the one that kills Schrödinger's cat.
I get there are arguments against hidden variables in quantum mechanics, but how could they be so sure, back in the twenties, that the strong nuclear forces involved in radioactivity were not governed by hidden variables rather than true randomness?
Einstein was very unhappy about the indeterminism of quantum mechanics regarding even well-understood effects like Young's slit experiment, but it seems kind of ideological and brash on the part of Heisenberg & Co. to extend the indeterminism to phenomena they hadn't even begun to understand, like alpha decay.
Is there a reason for this early self-assuredness in postulating indeterminism?
|
Schrödinger came up with the cat in 1935, which was relatively late in the development of quantum mechanics.
Back in the 1920's there had been a lot more uncertainty. The Copenhagen school had wanted to quantize the atom while leaving the electromagnetic field classical, as formalized in the Bohr-Kramers-Slater (BKS) theory. De Broglie's 1924 thesis included a hypothesis that there were hidden variables involved in the electron. In the 20's virtually nothing was known about the nucleus; the neutron had been theorized but not experimentally confirmed.
But we're talking about 1935. This was after the uncertainty principle, after Bothe-Geiger, after the discovery of the neutron, and after the EPR paper. (Schrödinger proposed the cat in a letter discussing the interpretation of EPR.) By this time it had long ago been appreciated that if you tried to quantize one field but not another (as in BKS), you had to pay a high price (conservation of energy-momentum only on a statistical basis), and experiments had falsified such a mixed picture for electrons interacting with light. It would have been very unnatural to quantize electrons and light, but not neutrons and protons. Neutrons and protons were material particles and therefore in the same conceptual category as electrons -- which had been the <em>first</em> particles to be quantized. Ivanenko had already proposed a nuclear shell model in 1932.
|
It was known that a nucleus existed back in the 20's. If you ever did experiments with nuclear decay, you would see the hallmarks of a Poisson process. I am talking about simple undergraduate experiments which, I guess, in most countries even theoreticians specializing in other branches have to do before they get their degree (like I did). You could also see that the half-life of a sample does not depend on the size of the sample. This invalidates any claim that there is some unknown deterministic interaction between nuclei which causes the statistical-like behavior, because then the physics would change with the size of the sample.
Therefore, one could conclude only that there was some hidden determinism inside the nucleus itself which causes decay to simulate a Poisson process. This would make a nucleus a highly complicated system indeed. Statistical physics teaches us how equilibrium classical systems behave. Why is the nucleus not at equilibrium with itself?! The nucleus would have to be very special indeed to deviate from Boltzmann's theory. This would require highly unusual behavior and completely unknown physical mechanisms. A theory like that would look very unnatural.
It is a much better and more natural conclusion to extend the quantum indeterminism known from previously understood experiments to the nucleus itself. In the end this approach proved right. When you actively research a completely new theory, you can never be 100% sure that what you are doing is correct until you finish your research. You need to look for the most natural and consistent theory you can and hope for the best. :)
|
https://physics.stackexchange.com
|
20,410
|
[
"https://dba.stackexchange.com/questions/20410",
"https://dba.stackexchange.com",
"https://dba.stackexchange.com/users/2321/"
] |
I am more experienced with SQL Server and Sybase than Oracle, and understand those products well. I've been asked to look for ways to reduce the server estate running Oracle. I understand that an instance in Oracle maps to a database hosting many tablespaces. I have a fairly good grasp of the fundamentals, however if I wanted to consolidate SERVER1,..,SERVER4 running Oracle database into one server what would be the best way to do it physically? I am considering Virtual as well using a DBaaS (Database as a Service) model, but am curious if it can/should be done physically.
Is it possible to have four separate instances point to four separate databases on one machine? Or would I have to merge the four databases into one database on the consolidated server and manage the schemas to ensure there are no name conflicts? If I did that would I have one instance or four?
I have read the documentation but I'm still not 100% sure about this area.
|
You have two options:
<ol>
<li>Run multiple Oracle instances on the same machine</li>
<li>Consolidate all of your Oracle instances into a single instance,
placing the data in separate schemas</li>
</ol>
Since you're familiar with SQL Server/Sybase, I'll explain the difference between them & Oracle as far as databases and users are concerned.
<ul>
<li>A SQL Server database is equivalent to an Oracle Schema. An Oracle schema is owned by a single user</li>
<li>A SQL Server dataserver is equivalent to an Oracle Instance</li>
</ul>
Running 4 instances on one machine is trivial, so I won't explain further.
Consolidating to a single database is also easy if the separate databases don't have conflicting schema names. If they do, it may not be an issue as long as the applications/interfaces/packages don't have hard-coded schema names - it's easy to export data from one schema in a database & import it into a different schema on another database.
|
It is fairly easy to consolidate multiple databases into one real database. In Oracle, a database is the collection of files. Users connect to the database by connecting to an instance. A database can be served by multiple instances, in which case you are running RAC.
So for simply consolidating databases into one single database you don't need RAC, but you could use it if you want/need to.
If you are consolidating into one database there are a few things to take into account:
<ol>
<li>namespaces</li>
<li>service level agreements - don't make maintenance impossible by combining conflicting SLAs.</li>
<li>performance isolation - use Resource Manager to handle this</li>
<li>application isolation - make sure the apps all use their own tns_alias to connect</li>
<li>services - give every application its own service name in the database, if possible.</li>
</ol>
When you have naming conflicts, there is a problem and the combination is not possible.
You might want to take some downtime for maintenance/upgrades. If you cannot get downtime from all applications at the same time, you have a problem.
Using Resource Manager you can give a certain performance guarantee for the specific services.
Services are a smart thing to use: they are the easiest way to see how resources are used, compared to one another.
Easiest is to run multiple instances on a single server, each serving its own database. This is the easiest but not the smartest thing to do. Smartest is to have a single instance on a single server. This is because every instance considers itself the master of the server. You cannot isolate the resource usage of multiple instances very easily, as you can within a single instance. If you want to give some performance guarantee on a multi-instance server, the setup will grow in complexity because in many cases you need to set up multiple projects and users to run your databases under.
A single database can easily support a few hundred applications, a lot cheaper than using a few hundred databases. This quickly saves BIG money on a yearly basis by making good use of Oracle features.
|
https://dba.stackexchange.com
|
260,257
|
[
"https://mathoverflow.net/questions/260257",
"https://mathoverflow.net",
"https://mathoverflow.net/users/100898/"
] |
Let $a,q$ be positive integers. I am trying to evaluate the following sum:
$\sum_{\substack{1<a<q \\(a,q)>1 \\ (a+1,q)>1}}1$. Is there a formula to calculate such sums?
Here is an example:
Let $q=15$.
We have the following multiples of the divisors $3$ and $5$:
$3,5,6,9,10,12,15$.
Since the two pairs $(5,6)$ and $(9,10)$ are consecutive numbers bigger than one but less than $q$, we get:
$\sum_{\substack{1<a<15 \\(a,15)>1 \\ (a+1,15)>1}}1=2$.
|
At first, we count the number of residues $a$ for which both $a$ and $a+1$ are coprime with $q$. Let $q=\prod p_i^{k_i}$ be the factorization of $q$. For each $p_i$, there exist $p_i-2$ admissible remainders modulo $p_i$ (the forbidden remainders are $0$ and $-1$), thus $(p_i-2)p_i^{k_i-1}$ admissible remainders modulo $p_i^{k_i}$, and thus by the Chinese Remainder Theorem the count equals $$F(q):=\prod (p_i-2)p_i^{k_i-1}=q\prod(1-2/p_i).$$
Now your question. There exist $\varphi(q)$ residues $a$ for which $(a,q)=1$, as many residues for which $(a+1,q)=1$, and $F(q)$ residues for which both $(a,q)=1$ and $(a+1,q)=1$. Thus there exist $\varphi(q)-F(q)$ residues $a$ for which $(a,q)=1$ and $(a+1,q)>1$. But the total number of $a$ for which $(a+1,q)>1$ equals $q-\varphi(q)$. Therefore the answer to your initial question is $q-2\varphi(q)+F(q)$.
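A brute-force check of this formula on small cases (a sketch of mine, using sympy for the totient):
<pre><code>from math import gcd
from sympy import totient, primefactors

def F(q):                              # q * prod(1 - 2/p) over primes p | q
    out = q
    for p in primefactors(q):
        out = out * (p - 2) // p
    return out

def direct(q):                         # literally count the a in the question
    return sum(1 for a in range(2, q)
               if gcd(a, q) > 1 and gcd(a + 1, q) > 1)

for q in [15, 30, 105, 210, 243]:
    assert direct(q) == q - 2 * totient(q) + F(q)
print("formula agrees with brute force")
</code></pre>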
|
What you have written down is already a formula for calculating the sum, so really you need to be more precise about what the question is.
But here are some comments which give a simpler formula in the case where $q$ is only divisible by a few primes.
If $q$ is a prime power $p^e$, then the sum is zero, because $(a,q)>1$ if and only if $p$ divides $a$, and that can't happen twice in a row.
If, as in your example, $q$ has two prime divisors $p_1$ and $p_2$, then $q=p_1^{e_1}p_2^{e_2}$ and what must be going on is that one of the two terms $a,a+1$ is a multiple of $p_1$ and the other a multiple of $p_2$. Hence $a$ either satisfies $a=0$ mod $p_1$ and $a=-1$ mod $p_2$ or $a=-1$ mod $p_1$ and $a=0$ mod $p_2$. In each case there is one solution mod $p_1p_2$ by the Chinese Remainder Theorem, and hence $q/p_1p_2=p_1^{e_1-1}p_2^{e_2-1}$ solutions between $1$ and $q$, giving us $2q/p_1p_2$ solutions in this case.
In the general case there is a problem though. Say three primes $p_1,p_2,p_3$ divide $q$. Then we are interested in solving $a=-1$ mod $p_1$ and $a=0$ mod $p_2$ ($q/p_1p_2$ solutions) OR $a=-1$ mod $p_1$ and $a=0$ mod $p_3$ ($q/p_1p_3$ solutions) OR... etc etc, so $3\times2=6$ possibilities giving what looks like $2q(1/p_1p_2+1/p_2p_3+1/p_3p_1)=2q(p_1+p_2+p_3)/(p_1p_2p_3)$ solutions. However unfortunately we have counted some solutions twice here -- there is one number mod $p_1p_2p_3$ which is $-1$ mod $p_1$ and $0$ mod $p_2$ and $p_3$ and we counted it too often. For three or more primes dividing $q$ it's hence messier and I'm not sure there's a simple formula.
Here's the explicit answer when 3 primes divide $q$. We may as well assume $q$ is squarefree (just multiply the answer by $q/p_1p_2p_3$ otherwise). The number of numbers between 1 and $q$ which are $0$ mod $p_1$ and $-1$ mod $p_2$ and congruent to $*$, neither $0$ nor $-1$, mod $p_3$ is $p_3-2$. Similarly for $(-1,0,*)$, $(-1,*,0)$ etc etc giving us $2(p_1+p_2+p_3)-12$. But now you need to count the number of times we are $(a,b,c)$ with $a,b,c$ all either $0$ or $-1$, but not all the same; this gives a further 6. So in this case we get $2(p_1+p_2+p_3)-6$.
The general case will be messier and I don't know if one can do better than this method.
|
https://mathoverflow.net
|
59,062
|
[
"https://dba.stackexchange.com/questions/59062",
"https://dba.stackexchange.com",
"https://dba.stackexchange.com/users/9353/"
] |
I have a table (not designed by me) which has 20 variably named columns. That is, depending on what type of record you are looking at, the applicable name of the column can change.
The possible column names are stored in another table, that I can query very easily.
Therefore, the query I'm really looking for goes something like this:
<pre><code>SELECT Col1 AS (SELECT ColName FROM Names WHERE ColNum = 1 and Type = @Type),
Col2 AS (SELECT ColName FROM Names WHERE ColNum = 2 and Type = @Type)
FROM Tbl1
WHERE Type = @Type
</code></pre>
Obviously that doesn't work, so how can I get a similar result?
<s>
I've tried building a query string and <code>EXECUTE</code>ing it, but that just returns "Command(s) Completed Successfully" and doesn't seem to return a rowset.
</s>
It turns out I was using an incorrect query to build the dynamic SQL and as such built an empty string. SQL Server definitely executed the empty string correctly.
Note that the reason I need this to occur, rather than simply hard coding the column names, is that the column names are user configurable.
|
Try the following code:
<pre><code>CREATE TABLE #Names
(
[Type] VARCHAR(50),
ColNum SMALLINT,
ColName VARCHAR(50),
ColDataType VARCHAR(20)
)
INSERT INTO #Names VALUES
('Customer', 1, 'CustomerID', 'INT'),
('Customer', 2, 'CustomerName', 'VARCHAR(50)'),
('Customer', 3, 'CustomerJoinDate', 'DATE'),
('Customer', 4, 'CustomerBirthDate', 'DATE'),
('Account', 1, 'AccountID', 'INT'),
('Account', 2, 'AccountName', 'VARCHAR(50)'),
('Account', 3, 'AccountOpenDate', 'DATE'),
('CustomerAccount', 1, 'CustomerID', 'INT'),
('CustomerAccount', 2, 'AccountID', 'INT'),
('CustomerAccount', 3, 'RelationshipSequence', 'TINYINT')
CREATE TABLE #Data
(
[Type] VARCHAR(50),
Col1 VARCHAR(50),
Col2 VARCHAR(50),
Col3 VARCHAR(50),
Col4 VARCHAR(50),
Col5 VARCHAR(50),
Col6 VARCHAR(50),
Col7 VARCHAR(50)
)
INSERT INTO #Data VALUES
('Customer', '1', 'Mr John Smith', '2005-05-20', '1980-11-15', NULL, NULL, NULL),
('Customer', '2', 'Mrs Hayley Jones', '2009-10-10', '1973-04-03', NULL, NULL, NULL),
('Customer', '3', 'ACME Manufacturing Ltd', '2012-12-01', NULL, NULL, NULL, NULL),
('Customer', '4', 'Mr Michael Crocker', '2014-01-13', '1957-01-23', NULL, NULL, NULL),
('Account', '1', 'Smith-Jones Cheque Acct', '2005-05-25', NULL, NULL, NULL, NULL),
('Account', '2', 'ACME Business Acct', '2012-12-01', NULL, NULL, NULL, NULL),
('Account', '3', 'ACME Social Club', '2013-02-10', NULL, NULL, NULL, NULL),
('Account', '4', 'Crocker Tipping Fund', '2014-01-14', NULL, NULL, NULL, NULL),
('CustomerAccount', '1', '1', '1', NULL, NULL, NULL, NULL),
('CustomerAccount', '2', '1', '2', NULL, NULL, NULL, NULL),
('CustomerAccount', '2', '3', '2', NULL, NULL, NULL, NULL),
('CustomerAccount', '3', '2', '1', NULL, NULL, NULL, NULL),
('CustomerAccount', '3', '3', '1', NULL, NULL, NULL, NULL),
('CustomerAccount', '4', '2', '2', NULL, NULL, NULL, NULL),
('CustomerAccount', '4', '4', '1', NULL, NULL, NULL, NULL)
DECLARE @Type VARCHAR(50) = 'Account' -- Or Customer, or CustomerAccount
DECLARE @SQLText NVARCHAR(MAX) = ''
SELECT @SQLText += 'SELECT '
SELECT @SQLText += ( -- Add in column list, with dynamic column names.
SELECT 'CONVERT(' + ColDataType + ', Col' + CONVERT(VARCHAR, ColNum) + ') AS [' + ColName + '],'
FROM #Names
WHERE [Type] = @Type FOR XML PATH('')
)
SELECT @SQLText = LEFT(@SQLText, LEN(@SQLText) - 1) + ' ' -- Remove trailing comma
SELECT @SQLText += 'FROM #Data WHERE [Type] = ''' + @Type + ''''
PRINT @SQLText
EXEC sp_executesql @SQLText
</code></pre>
This returns the SELECT statement: <code>SELECT CONVERT(INT, Col1) AS [AccountID],CONVERT(VARCHAR(50), Col2) AS [AccountName],CONVERT(DATE, Col3) AS [AccountOpenDate] FROM #Data WHERE [Type] = 'Account'</code>
|
This sounds like a prime candidate for a front-end display solution. Query 1 would pull back your data, Query 2 would pull back the column names, and in code, when you build whatever structure you use for display, you set the headers from the second query.
While a pure SQL method may be possible, it will be dynamic SQL, and code maintenance would be a nightmare.
Also, you're probably looking for <code>sp_executesql</code> and not just <code>EXECUTE N'Query String'</code>, as that may fix your issue of "command completed successfully".
|
https://dba.stackexchange.com
|
151,163
|
[
"https://stats.stackexchange.com/questions/151163",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
] |
<blockquote>
Implement an estimator using Monte Carlo integration of the quantity
$$\theta=\int_0^1e^{-x^2}(1-x)dx$$ Estimate $\theta$ with a variance
lower than $10^{-4}$ by writing the variance of this estimator depending on
sample size.
</blockquote>
We can write
$$\theta=\int \phi(x)f(x)dx$$
where $\phi(x)$ is a function and $f(x)$ is a density so that $$\phi(x)f(x)=e^{-x^2}(1-x)\mathbb{I}_{(0,1)}(x)$$ The exercise leaves open the choice of the density.
Thus the estimator has the form $$\hat{\theta}=\frac{1}{n}\sum_i \phi(x_i)$$
The exercise asks for an estimate of $\theta$ with variance lower than $0.0001$ by expressing the variance of the estimator as a function of n.
|
The problem is that without knowing exactly what <span class="math-container">$\theta$</span> is, we cannot know the variance of its Monte-Carlo estimator. The solution is to <em>estimate</em> that variance and hope the estimate is sufficiently close to the truth.
<hr />
<strong>The very simplest form of Monte-Carlo estimation</strong> surrounds the graph of the integrand, <span class="math-container">$f(x) = e^{-x^2}(1-x)$</span>, by a box (or other congenial figure that is easy to work with) of area <span class="math-container">$A$</span> and places <span class="math-container">$n$</span> independent uniformly random points in the box. The proportion of points lying under the graph, times the area <span class="math-container">$A$</span>, estimates the area <span class="math-container">$\theta$</span> under the graph. As usual, let's write this estimator of <span class="math-container">$\theta$</span> as <span class="math-container">$\hat\theta$</span>. For examples, see the figure at the end of this post.
Because the chance of any point lying under the graph is <span class="math-container">$p = \theta / A$</span>, the count <span class="math-container">$X$</span> of points lying under the graph has a Binomial<span class="math-container">$(n, p)$</span> distribution. This has an expected value of <span class="math-container">$np$</span> and a variance of <span class="math-container">$np(1-p)$</span>. The variance of the estimate therefore is
<span class="math-container">$$\text{Var}(\hat \theta) = \text{Var}\left(\frac{AX}{n}\right) = \left(\frac{A}{n}\right)^2\text{Var}(X) = \left(\frac{A}{n}\right)^2 n \left(\frac{\theta}{A}\right)\left(1 - \frac{\theta}{A}\right) = \frac{\theta(A-\theta)}{n}.$$</span>
Because we do not know <span class="math-container">$\theta$</span>, we first use a small <span class="math-container">$n$</span> to obtain an initial estimate and plug that into this variance formula. (A good educated guess about <span class="math-container">$\theta$</span> will serve well to start, too. For instance, the graph (see below) suggests <span class="math-container">$\theta$</span> is not far from <span class="math-container">$1/2$</span>, so you could start by substituting that for <span class="math-container">$\hat\theta$</span>.) This is the <em>estimated variance</em>,
<span class="math-container">$$\widehat{\text{Var}}(\hat\theta) = \frac{\hat\theta(A-\hat\theta)}{n}.$$</span>
Using this initial estimate <span class="math-container">$\hat\theta$</span>, find an <span class="math-container">$n$</span> for which <span class="math-container">$\widehat{\text{Var}}(\hat\theta) \le 0.0001 = T$</span>. The smallest possible such <span class="math-container">$n$</span> is easily found, with a little algebraic manipulation of the preceding formula, to be
<span class="math-container">$$\hat n = \bigg\lceil\frac{\hat\theta(A - \hat\theta)}{T}\bigg\rceil.$$</span>
Iterating this procedure eventually produces a sample size that will at least approximately meet the variance target. As a practical matter, at each step <span class="math-container">$\hat n$</span> should be made sufficiently greater than the previous estimate of <span class="math-container">$n$</span> so that eventually a large enough <span class="math-container">$n$</span> is guaranteed to be found for which <span class="math-container">$\widehat{\text{Var}}(\hat\theta)$</span> is sufficiently small. For instance, if <span class="math-container">$\hat n$</span> is less than twice the preceding estimate, use twice the preceding estimate instead.
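Here is a minimal sketch of this iterative procedure in Python (the variable names and the seed are my own; the box is taken to be the unit square, so <span class="math-container">$A=1$</span>):
<pre><code>import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.exp(-x**2) * (1 - x)

A = 1.0   # area of the bounding box [0,1] x [0,1]
T = 1e-4  # target variance
n = 10    # initial sample size

while True:
    x, y = rng.uniform(size=(2, n))
    theta_hat = A * np.mean(y < f(x))           # proportion of points under the graph
    var_hat = theta_hat * (A - theta_hat) / n   # estimated variance
    if 0 < theta_hat < A and var_hat <= T:
        break
    # next n from the variance formula, but at least double the current n
    n = max(2 * n, int(np.ceil(theta_hat * (A - theta_hat) / T)))

print(n, theta_hat, var_hat)
</code></pre>
With high probability this terminates with <span class="math-container">$n$</span> in the low thousands and <span class="math-container">$\hat\theta$</span> near <span class="math-container">$0.43$</span>, consistent with the run described below.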
<hr />
<strong>In the example in the question,</strong> because <span class="math-container">$f$</span> ranges from <span class="math-container">$1$</span> down to <span class="math-container">$0$</span> as <span class="math-container">$x$</span> goes from <span class="math-container">$0$</span> to <span class="math-container">$1$</span>, we may surround its graph by a box of height <span class="math-container">$1$</span> and width <span class="math-container">$1$</span>, whence <span class="math-container">$A=1$</span>.
One calculation beginning at <span class="math-container">$n=10$</span> first estimated the variance as <span class="math-container">$2/125$</span>, resulting in a guess <span class="math-container">$\hat n = 1600$</span>. Using <span class="math-container">$1600$</span> new points (I didn't even bother to recycle the original <span class="math-container">$10$</span> points) resulted in an updated estimated variance of <span class="math-container">$0.0001545$</span>, which was still too large. It suggested using <span class="math-container">$\hat n = 2473$</span> points. The calculation terminated there with <span class="math-container">$\hat\theta = 0.4262$</span> and <span class="math-container">$\widehat{\text{Var}}(\hat\theta) = 0.00009889$</span>, just less than the target of <span class="math-container">$0.0001$</span>. The figure shows the random points used at each of these three stages, from left to right, superimposed on plots of the box and the graph of <span class="math-container">$f$</span>.
<img src="https://i.stack.imgur.com/ieA5s.png" alt="Figure" />
Since the true value is <span class="math-container">$\theta = 0.430764\ldots$</span>, the true variance with <span class="math-container">$n=2473$</span> is <span class="math-container">$\theta(1-\theta)/n = 0.00009915\ldots$</span>. (Another way to express this is to observe that <span class="math-container">$n=2453$</span> is the smallest number for which the true variance is less than <span class="math-container">$0.0001$</span>, so that using the estimated variance in place of the true variance has cost us an extra <span class="math-container">$20$</span> sample points.)
In general, when the area under the graph <span class="math-container">$\theta$</span> is a sizable fraction of the box area <span class="math-container">$A$</span>, the estimated variance will not change much when <span class="math-container">$\theta$</span> changes, so it's usually the case that the estimated variance is accurate. When <span class="math-container">$\theta/A$</span> is small, a better (more efficient) form of Monte-Carlo estimation is advisable.
|
<blockquote>
Implement an estimator using Monte Carlo integration of
$$\theta=\int\limits_0^1e^{-x^2}(1-x)dx$$
</blockquote>
While you can use a $\mathcal{U}([0,1])$ distribution for your Monte Carlo experiment, the fact that both $$x \longrightarrow \exp\{-x^2\}\quad \text{and}\quad x \longrightarrow (1-x)$$ are decreasing functions suggests that a decreasing density would work better. For instance, a <em>truncated</em> Normal $\mathcal{N}^1_0(0,.5)$ distribution could be used:
\begin{align*}\theta&=\int\limits_0^1e^{-x^2}(1-x)\,\text{d}x\\&=[\Phi(\sqrt{2})-\Phi(0)]\sqrt{2\pi\frac{1}{2}}\int\limits_0^1\frac{1}{\Phi(\sqrt{2})-\Phi(0)}\dfrac{e^{-x^2/2\frac{1}{2}}}{\sqrt{2\pi\frac{1}{2}}}(1-x)\,\text{d}x\\&=[\Phi(\sqrt{2})-\Phi(0)]\sqrt{\pi}\int\limits_0^1\frac{1}{\Phi(\sqrt{2})-\Phi(0)}\dfrac{e^{-x^2}}{\sqrt{\pi}}(1-x)\,\text{d}x\end{align*}
which leads to the implementation
<pre><code>n=1e8
U=runif(n)
#inverse cdf simulation
X=qnorm(U*pnorm(sqrt(2))+(1-U)*pnorm(0))/sqrt(2)
X=(pnorm(sqrt(2))-pnorm(0))*sqrt(pi)*(1-X)
mean(X)
sqrt(var(X)/n)
</code></pre>
with the result
<pre><code>> mean(X)
[1] 0.4307648
> sqrt(var(X)/n)
[1] 2.039857e-05
</code></pre>
fairly close to the true value
<pre><code>> integrate(function(x) exp(-x^2)*(1-x),0,1)
0.4307639 with absolute error < 4.8e-15
</code></pre>
Another representation of the same integral is to use instead the distribution with density $$f(x)=2(1-x)\mathbb{I}_{[0,1]}(x)$$ and cdf $F(x)=1-(1-x)^2$ over $[0,1]$. The associated estimation is derived as follows:
<pre><code>> x=exp(-(1-sqrt(runif(n)))^2)/2
> mean(x)
[1] 0.4307693
> sqrt(var(x)/n)
[1] 7.369741e-06
</code></pre>
which does better than the truncated normal simulation.
|
https://stats.stackexchange.com
|
47,476
|
[
"https://softwareengineering.stackexchange.com/questions/47476",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14/"
] |
In academia, it's considered cheating if a student copies code/work from someone/somewhere else without giving credit, and tries to pass it off as his/her own.
Should companies make it a requirement for developers to properly credit all <em>non-trivial</em> code and work that they did not produce themselves? Is it useful to do so, or is it simply overkill?
I understand there are various free licenses out there, but if I find stuff I like and actually use, I really feel compelled to give credit via comment in code even if it's not required by the license (or lack thereof one).
|
I'd say this is probably essential. For one thing, the company may need to deal with any license terms and other legal implications - just because it's "free" doesn't mean you can do what you like with it.
However, there may be an exception with example code copied and adapted from reference books. After all, that's basically what that code is there for. Even so, a comment is a good idea - someone may need to go back to the source for bugfixes (e.g. in errata), or for a better understanding of why you used it.
|
I always do. I also link back to the original source. I do this more for reference than to give credit (so I can go back and see the original author's notes and/or updates).
I think it's good practice, but totally unenforceable; having a policy in place is almost worthless, as I don't think it will change anyone's behavior.
|
https://softwareengineering.stackexchange.com
|
20,922
|
[
"https://electronics.stackexchange.com/questions/20922",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/5651/"
] |
I'm currently working on a project involving (I think 3 mm, 1.5 V DC) infrared LEDs. However, due to my photoresistor, I think the current, voltage, or whatever (I forget) will vary greatly, down to minuscule amounts. So, do these LEDs need UVLOs (under-voltage lockouts)? They are very, very sensitive and I've already wasted half the pack.
|
No.
LEDs are not damaged by low voltages.
If you are damaging LEDs, you must be driving them beyond their rated currents. Show your circuit to receive advice.
In general, very few electronic components are damaged by undervoltage. Some microprocessors can mis-execute in a brownout condition, which could have undesirable effects depending on the application. And Li-Ion batteries should not be discharged to too low a voltage.
|
LEDs cannot be damaged by "forward" voltages that are so low that they do not draw rated current.
They <strong>can</strong> be damaged by voltages that are low by normal standards.<br>
An infrared LED may easily be destroyed by a 3V3 or 5V power supply if current in excess of its maximum rated current flows.
LEDs are intended to be driven either by a constant-current source or by a voltage source plus a series resistor chosen so that, in both cases, the maximum current is less than the rated current.
In your circuit, the worst-case current must <strong>NEVER</strong> be able to exceed the maximum rated value.
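To make this concrete, here is a back-of-the-envelope sizing of that series resistor (a Python sketch; the figures are illustrative assumptions only, so take the forward voltage and current rating from your LED's datasheet):
<pre><code>V_supply = 5.0  # supply voltage, V (assumed)
V_f = 1.5       # typical IR LED forward voltage, V (from the datasheet)
I_max = 0.020   # maximum continuous forward current, A (e.g. 20 mA)

# smallest series resistor that limits worst-case current to I_max
R_min = (V_supply - V_f) / I_max
print(f"Use at least {R_min:.0f} ohms")  # 175 ohms -> pick 180 or 220 ohms
</code></pre>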
LEDs may be damaged by reverse-polarity voltage. The current drawn will be small, even when there is enough voltage to kill the LED.
Many LEDs are prone to electrostatic damage from "static electricity". Handling LEDs without wearing an earth strap or taking equivalent precautions may be enough to destroy them.
|
https://electronics.stackexchange.com
|
336,231
|
[
"https://physics.stackexchange.com/questions/336231",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/120802/"
] |
If you (hypothetically) had an infinitely cold ice cube (an ice cube that stays at absolute zero no matter how much heat it absorbs), how long would it take for the Universe to cool down to absolute zero?
|
There is no such thing as an infinitely cold ice cube.
The closest scenario I can think of is a system with a heat sink: a system coupled to a very large heat reservoir. You can then solve a heat equation.
You should also take into account that only at
$$t\rightarrow \infty $$
will the temperature of the system equal that of the reservoir, so there is no definite period of time that answers the question.
Instead you can ask what the characteristic time of cooling is (i.e. when the temperature difference will fall to 1/e of its initial value) or you can ask when some threshold temperature will be reached (e.g. 0.001 of the initial temperature).
|
Absolute zero means that the particles aren't moving at all. Any thermal energy would be absorbed by the cube. If we assume diffusion as the only method of heat transfer, there must be a direct chain of mass from the cube to all other objects in the universe for it to absorb energy from all of them. The second there is vacuum, diffusion can no longer occur. Hence, assuming an ideal heat sink, it would just reduce everything on Earth to absolute zero.
It's also true that diffusion isn't the only method of heat transfer. Electromagnetic waves and radiation can travel through vacuum and transfer heat, but this would have little to no effect. These forms of energy are reaching us already, and the amount of energy from other objects in the universe that reaches Earth is minuscule. The ice cube would not increase what arrives.
Hence, it would only have an effect on Earth.
|
https://physics.stackexchange.com
|
413,788
|
[
"https://electronics.stackexchange.com/questions/413788",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/194133/"
] |
I am having difficulty producing <span class="math-container">\$S, P, Q \$</span> and <span class="math-container">\$D\$</span> from the instantaneous power <span class="math-container">\$p(t)\$</span>.
Let's say that both voltage and current are clear sine waves. Then:
<span class="math-container">\$p(t) = Vsin(ωt+φ_V) \cdot Isin(ωt+φ_I)\$</span>
and by using the identity
<span class="math-container">\$sinA sinB = \frac{1}{2} (cos(A-B) - cos(A+B)) \$</span>
instantaneous power can be written as
<span class="math-container">\$p(t) = \frac{VI}{2}cos(φ_V-φ_I) - \frac{VI}{2}cos(2ωt+φ_V+φ_I)\$</span>
Now, it can be proved that for a pure sine wave <span class="math-container">\$A_{RMS} = \frac{A_{peak}}{\sqrt2} \$</span>, hence:
<span class="math-container">\$P = V_{RMS}I_{RMS} \cdot cos(φ_V-φ_I)\$</span>.
How to prove that <span class="math-container">\$Q = V_{RMS}I_{RMS}\cdot sin(φ_V-φ_I)\$</span> and <span class="math-container">\$S = V_{RMS}I_{RMS}\$</span>?
Moreover, how to prove that
<span class="math-container">\$P = \sum_{k=1}^\infty V_{RMS}I_{RMS} \cdot cos(φ_V-φ_I) \\ Q=\sum_{k=1}^\infty V_{RMS}I_{RMS} \cdot sin(φ_V-φ_I) \\ D = \sqrt{\sum_{k\ne j}^{\infty} U_k^2I_j^2 +U_j^2I_K^2 - 2 U_kI_kU_jI_jcos(φ_k-φ_j)}\$</span>
when voltage and current are not sines, but arbitrary, periodic waveforms?
I don't expect from anyone to provide me with the full proof, but to show me the right direction.
<hr>
EDIT:
In order to prove that <span class="math-container">\$Q = V_{RMS}I_{RMS}\cdot sin(φ_V-φ_I)\$</span> - for pure sine waves - one must decompose the current into two components: one in phase with the voltage and one <span class="math-container">\$\pm90^o \$</span> out of phase. By drawing the phasors <span class="math-container">\$\vec{V}, \vec{I_X}\$</span> and <span class="math-container">\$ \vec{I_Y}\$</span> this becomes obvious.
|
I think I found the answer I am looking for. I am presenting it here, so it will be available to anyone that is interested.
First consider:
<span class="math-container">\$v(t) = V_1cos(ω_1t+φ_{V_1}) \\ i(t)=I_1cos(ω_1t+φ_{I_1})\$</span>
then:
<span class="math-container">\$p(t) = v(t)\cdot i(t) = \\V_1I_1cos(ω_1t+φ_{V_1})cos(ω_1t+φ_{I_1}) = \\\frac{1}{2}V_1I_1(cos(φ_{V_1}-φ_{I_1})+cos(2ω_1t+φ_{V_1}+φ_{I_1})) \$</span>
Let's define <span class="math-container">\$θ = φ_{V_1}-φ_{I_1}\$</span>, hence <span class="math-container">\$φ_{V_1}+φ_{I_1} = 2φ_{V_1}-θ\$</span> and the above can be written as:
<span class="math-container">\$p(t) =\frac{1}{2}V_1I_1(cos(θ) + cos(2ω_1t+2φ_{V_1}-θ))\$</span>, but
<span class="math-container">\$cos(2ω_1t+2φ_{V_1}-θ) = cos(2ω_1t+2φ_{V_1})cos(θ)+sin(2ω_1t+2φ_{V_1})sin(θ)\$</span> and
<span class="math-container">\$\frac{1}{2}V_1I_1 = \frac{V_1}{\sqrt(2)}\frac{I_1}{\sqrt(2)}= \tilde{V_1}\tilde{I_1}\$</span>
Finally:
<span class="math-container">\$p(t) = \tilde{V_1}\tilde{I_1}cos(θ)(1+cos(2ω_1t+2φ_{V_1})) + \tilde{V_1}\tilde{I_1}sin(θ)sin(2ω_1t+2φ_{V_1}) \Rightarrow \\ p(t) =P(1+cos(2ω_1t+2φ_{V_1}))+Qsin(2ω_1t+2φ_{V_1})\$</span>
It is clear that one component of the instantaneous power is always positive (P) and the other one oscillates back and forth, with mean value equal to zero (Q). Both components have double the frequency of the initial signals and an initial phase.
Now the fun begins...
<hr>
Consider
<span class="math-container">\$v(t) = \sum_{n=1}^\infty V_ncos(nω_1t+φ_{V_n}) \\ i(t)=\sum_{n=1}^\infty I_ncos(nω_1t+φ_{I_n})\$</span>
<span class="math-container">\$p(t) = \\ V_1I_1cos(ω_1t+φ_{V_1})cos(ω_1t+φ_{I_1}) +V_2I_2cos(2ω_1t+φ_{V_2})cos(2ω_1t+φ_{I_2})+... \\
V_1I_2cos(ω_1t+φ_{V_1})cos(2ω_1t+φ_{I_2}) +
V_1I_3cos(ω_1t+φ_{V_1})cos(3ω_1t+φ_{I_3}) + ... \\
V_2I_1cos(2ω_1t+φ_{V_2})cos(ω_1t+φ_{I_1}) + ...\$</span>
Hence, it is possible to write instantaneous power as:
<span class="math-container">\$p(t) = \sum_{n=1}^\infty V_nI_ncos(nω_1t+φ_{V_n})cos(nω_1t+φ_{I_n})+\sum_{j\ne k} [V_jI_kcos(jω_1t+φ_{V_j})cos(kω_1t+φ_{I_k})+V_kI_jcos(kω_1t+φ_{V_k})cos(jω_1t+φ_{I_j})]\$</span>
As proved above, the first sum provides P and Q as:
<span class="math-container">\$P = \sum_{k=1}^\infty \tilde{V_k}\tilde{I_k}cos(θ)\\Q = \sum_{k=1}^\infty \tilde{V_k}\tilde{I_k}sin(θ)\$</span>
The second sum is what is called Distortion Power and it can be calculated as:
<span class="math-container">\$D=\sqrt{S^2-P^2-Q^2}\$</span>, where:
<span class="math-container">\$S^2 = \sum_{k=1}^\infty[{V_k^2}{I_k^2}]+\sum_{k\ne j}[V_k^2I_j^2+V_j^2I_k^2] \\
P^2 = \sum_{k=1}^\infty[V_k^2I_k^2cos^2(φ_k)]+\\
V_1I_1V_2I_2cos(φ_1)cos(φ_2)+V_1I_1V_3I_3cos(φ_1)cos(φ_3)+... \\
Q^2 = \sum_{k=1}^\infty[V_k^2I_k^2sin^2(φ_k)]+\\
V_1I_1V_2I_2sin(φ_1)sin(φ_2)+V_1I_1V_3I_3sin(φ_1)sin(φ_3)+...\$</span>
Hence
<span class="math-container">\$P^2+Q^2 = \sum_{k=1}^\infty[V_k^2I_k^2(cos^2(φ_k)+sin^2(φ_k))]-\\
V_1I_1V_2I_2(cos(φ_1)cos(φ_2)+sin(φ_1)sin(φ_2))-... = \\
\sum_{k=1}^\infty[V_k^2I_k^2]-\sum_{k\ne j}2V_kI_kV_jI_jcos(φ_k-φ_j)\$</span>
Last step is:
<span class="math-container">\$D=\sqrt{\sum_{k\ne j}V_k^2I_j^2+V_j^2I_k^2-2V_kI_kV_jI_jcos(φ_k-φ_j)}\$</span>
I skipped some calculations, but they are easy to make.
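As a numerical sanity check of the algebra, one can sample a two-harmonic voltage and current, compute <span class="math-container">\$P\$</span>, <span class="math-container">\$Q\$</span> and <span class="math-container">\$S\$</span> from their definitions, and compare <span class="math-container">\$D=\sqrt{S^2-P^2-Q^2}\$</span> with the closed form above (a Python sketch; the amplitudes and phases are arbitrary):
<pre><code>import numpy as np

w = 2 * np.pi  # fundamental frequency, so the period is T = 1
t = np.linspace(0, 1, 200_000, endpoint=False)

# (amplitude, phase) per harmonic for voltage and current
V = [(10.0, 0.3), (4.0, -0.5)]
I = [(2.0, -0.2), (1.0, 0.9)]

v = sum(a * np.cos((k + 1) * w * t + p) for k, (a, p) in enumerate(V))
i = sum(a * np.cos((k + 1) * w * t + p) for k, (a, p) in enumerate(I))

P = np.mean(v * i)                              # active power, mean of p(t)
S = np.sqrt(np.mean(v ** 2) * np.mean(i ** 2))  # Vrms * Irms
Q = sum(av * ai / 2 * np.sin(pv - pi)           # sum of Vk_rms*Ik_rms*sin(theta_k)
        for (av, pv), (ai, pi) in zip(V, I))
D = np.sqrt(S ** 2 - P ** 2 - Q ** 2)

# closed form over the single pair k=1, j=2 (RMS amplitudes)
V1, V2 = V[0][0] / 2 ** 0.5, V[1][0] / 2 ** 0.5
I1, I2 = I[0][0] / 2 ** 0.5, I[1][0] / 2 ** 0.5
th1, th2 = V[0][1] - I[0][1], V[1][1] - I[1][1]
D_closed = np.sqrt(V1**2 * I2**2 + V2**2 * I1**2
                   - 2 * V1 * I1 * V2 * I2 * np.cos(th1 - th2))
print(D, D_closed)  # agree up to discretization error
</code></pre>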
|
Starting with the <strong>definition</strong> for active power
<span class="math-container">\$P = \frac{1}{T} \int_0^T v(t) \cdot i(t) \cdot dt = \frac{1}{T} \int_0^T p(t) \cdot dt\$</span>
one gets the following expression for the power for linear networks (containing R, L, C, all constant):
<span class="math-container">\$P = \sum_{k=1}^\infty V_{k,RMS} \cdot I_{k,RMS} \cdot cos(φ_{V,k}-φ_{I,k})\$</span>
For a single (pure) sinewave there is
<span class="math-container">\$P = V_{RMS} \cdot I_{RMS} \cdot cos(φ_{V}-φ_{I})\$</span>
Above expressions have the meaning of power consumed (e.g. in form of heat).
Now there is the <strong>definition</strong> of reactive power as
<span class="math-container">\$Q = \sum_{k=1}^\infty V_{k,RMS} \cdot I_{k,RMS} \cdot sin(φ_{V,k}-φ_{I,k})\$</span>
The important point is that this is a <strong><em>definition</em></strong>, not something derived. In the case of linear networks it has physical meaning; in the case of nonlinear networks (containing switches, diodes, nonlinear inductors ...) it has not.
Same is true for the apparent power which is also a <strong>definition</strong> which cannot be derived, and which is defined as
<span class="math-container">\$S = V_{RMS} \cdot I_{RMS}\$</span>
It allows one to compare the size of rotating machines and/or transformers in a grid (of fixed frequency, e.g. 50 Hz or 60 Hz), which is very useful for the power engineer.
For linear networks one can easily show from the equations (definitions) above that
<span class="math-container">\$S^2 = P^2 + Q^2\$</span>
In case of nonlinear networks the above equation does not work and another power term is added as
<span class="math-container">\$S^2 = P^2 + Q^2 + D^2\$</span>
This gives the <strong>definition</strong> of D as
<span class="math-container">\$D = \sqrt {S^2 - P^2 - Q^2}\$</span>
I found this argumentation in my old university textbook (Franz Zach, 'Leistungselektronik: Bauelemente, Leistungskreise, Steuerungskreise, Beeinflussungen', ISBN-10: 3709144590, Springer, Oct. 1980).
Short answer:
<ul>
<li>Instantaneous power p(t) results in P which is easy to show for linear networks. </li>
<li>Q is defined, not derived from p(t), with the background of linear networks. </li>
<li>S is defined, not derived from p(t), with the background of electrical machines in the grid. </li>
<li>D is defined, not derived from p(t), in order to bring together P, Q, and S in nonlinear networks.</li>
</ul>
|
https://electronics.stackexchange.com
|
395,028
|
[
"https://physics.stackexchange.com/questions/395028",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/185504/"
] |
I've searched about this but I can't really find an answer that satisfies me.
What happens physically that determines "this wave has a frequency equal to x Hz and a wavelength equal to y"?
I can't seem to understand this. Does the wavelength have anything to do with an actual length? Is there something moving up and down, with each cycle being a complete wave?
I'm very confused, especially since the wave analysis we see in textbooks is a mathematical analysis, and then I hear things like "the frequency of a wave is related to its energy".
And I get even more confused when I hear that "the wavelength of a wave can stop it from passing through a hole with a diameter equal to x".
I can't connect the mathematical side to the physical side.
|
<blockquote>
Does the wavelength has anything to do with length is there something moving up and down and each cycle is a complete wave
</blockquote>
In classical EM, the answer is that the wavelength is the distance between successive peaks of the electric field.
When you think of a water wave, it's easy to see the peak because it's sticking up out of the water.
But consider a sound wave in metal... there's no physical peak in the same terms. In this case, the wavelength is the distance between the peak tension in the metal as the sound squeezes and stretches it. There's physical motion at the microscopic level, but you don't see it.
Classical EM works in the same fashion: it's the internal "tension" of the electric (or magnetic) field that you're measuring. Of course that raises the question of what this field actually is, but that's another question (or non-question, I guess)...
When this field passes a conductive material, the basic pattern of the field can be seen in the motion of the electrons in the conductor. That's how antennas create a signal that electronics can amplify. You won't see that either, but this is more directly like the sound case.
<blockquote>
The frequency of a wave is related to it´s energy
</blockquote>
Now that's something different. This statement is not true in classical EM, this is something that arises in QM. If the book does not make that clear, the book is crap.
<blockquote>
The wavelength of a wave can stop a wave from passing trough a hole with a diameter equal to x
</blockquote>
You're lucky this is so, otherwise you'd fry yourself looking through the holes in the metal plate in the door of a microwave!
So the reason for this is very complex, but it all boils down to the hole having to be in a material that interacts with the EM. So it's a metal plate in the door, not a plastic one.
Think about the old-school TV antennas you see, sadly hanging on rusting towers. Ever notice they always consist of a series of horizontal metal poles, with a clearly defined distance between them?
What happens is that any time you have a current in a conductor it radiates EM. So when a radio (TV) signal goes past the antenna it picks up that pattern and then <em>rebroadcasts it</em>. The strength of that signal is maximized if the rods, or <em>elements</em>, are a resonant length - which is why you have different lengths in a TV antenna to pick up different channels.
If you <em>space</em> the metal rods properly, that secondary signal will arrive at a given location exactly one wavelength after the "original" signal, and the two will constructively interfere. Put a bunch in a row and you amplify the signal. If you look closely at one of these antennas, you'll note the wires going to the TV actually only connect to one of the sets of rods, the others are passively adding to the signal through this interference process.
I hope that makes sense...
The same thing is happening in the plate, sort of. In this case the responding signal interferes <em>destructively</em> with the original signal on the back, and constructively on the front. The result is basically a mirror. But this only works when the holes are smaller than the wavelength and the plate has the right thickness. A microwave oven's waves are around 12 cm long, so a hole that's a couple of mm is effectively like no hole at all. Light from the bulb inside has a wavelength hundreds of thousands of times smaller, so it gets through no problem.
Handy, no?
Another example: have you noticed modern TV antennas don't look like the old-school ones? They have an X shaped <em>active element</em> at the front, and then a bunch of rods behind them. The rods are spaced close together, compared to the ~2m long signals they may as well be a solid sheet of metal. Yet the rods have a crapload less wind drag than a solid sheet.
Handy, no?
It's also much more complex than that, and you need to use all sorts of expansions and such to calculate it, but this is the basic idea.
|
My understanding is that what is moving up and down is the strength of the electric field at a particular point in space. EM radiation is generated by the mechanical vibration of charged particles. Remember that a charged particle casts an electric field that decays as 1/r^2. When a charged particle moves, say in the +x direction, all of the points in space have to acquire new field values to reflect the new distance from the charged particle, but it takes time for the "information" that the particle has moved to reach these points in space. Along the x axis, as the charged particle vibrates back and forth, the electric field at points on the axis is alternately increased and decreased as electromagnetic pulses propagating at the speed of light pass through them. At any particular point, the field will be alternately high and low. This is the EM wave, and its frequency is set by the vibrational frequency of the charged particle. It is somewhat similar to splashing your hands in a pool: the frequency of the water waves corresponds to the frequency of your splashing. For me it seems intuitive that the faster you splash, or vibrate the particle, the more energy is being used to reorganize the electric field.
Einstein also predicted a similar effect with gravitational waves, which have now been measured from the motion of black holes.
|
https://physics.stackexchange.com
|
8,423
|
[
"https://cardano.stackexchange.com/questions/8423",
"https://cardano.stackexchange.com",
"https://cardano.stackexchange.com/users/5844/"
] |
I have a Daedalus wallet running, which is a full node, so in theory I should get all the CLI functionality from that one install. How do I access it?
|
Daedalus has its own <code>cardano-node</code> instance, so you can specify the node's socket variable and use it for <code>cardano-cli</code> purposes.
First, launch Daedalus, and click on Help > Daedalus Diagnostics. Under the "Core Info" section, the "Daedalus State Directory" specifies the filepath that Daedalus uses on your computer. There should be a socket file (likely named <code>cardano-node.socket</code>) in this directory which you can point to in your <code>bashrc</code> file.
In your CLI, run: <code>nano ~/.bashrc</code>
Now, scroll down and add the following line to the bashrc file:
<code>export CARDANO_NODE_SOCKET_PATH=<PATH_TO_SOCKET_IN_DAEDALUS_STATE_DIRECTORY></code>
Exit the bashrc file and run: <code>source ~/.bashrc</code>
Make sure cardano-cli is installed and is in your $PATH. You should now be able to run <code>cardano-cli</code> commands using Daedalus' <code>cardano-node</code> instance.
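As a quick sanity check that the socket is wired up, a command like <code>cardano-cli query tip --mainnet</code> should report the node's current slot and block (assuming your Daedalus wallet is a mainnet one).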
|
Go to the Daedalus menu, and you should see a menu item that says "Open a Marlowe terminal"; a terminal will open with the Marlowe CLI installed.
|
https://cardano.stackexchange.com
|
139,677
|
[
"https://mathoverflow.net/questions/139677",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8133/"
] |
DISCLAIMER: All pointclasses considered here are boldface.
Most of the time, when doing descriptive set theory, we want the projective sets to "behave well;" for example, maybe we don't want there to be nonmeasurable projective sets, or projective well orderings of $\mathbb{R}$, etc. Generally, this means making some (fairly conservative) large cardinal assumption, or equivalent.
At the far opposite end of things is the axiom that all sets are constructible, $V=L$. This axiom implies that there is a projective - in fact, $\Delta^1_2$ - well-ordering of the reals, and so projective sets become bad very early in the hierarchy.
My question is about the state of affairs when $V=L$ holds. My motivation is simply that I don't feel I have a good grasp on basic concepts in descriptive set theory, and the following seemed like a good test problem to assign myself; but I have thought about it for a while without making progress, so I'm asking here:
Let $\oplus$ be one of the usual pairing operators on $\omega^\omega$. For the purposes of this question, we say that a pointclass $\Gamma\subseteq \mathcal{P}(\omega^\omega)$ has the uniformization property if whenever $A\in \Gamma$, there is some $B\in \Gamma$ such that:
<ul>
<li>$B\subseteq A$, and</li>
<li>Whenever $x\oplus y\in A$, there is a unique $z$ such that $x\oplus z\in B$.</li>
</ul>
That is, we view $A$ as coding a relation on $\omega^\omega\times \omega^\omega$, and $B$ is the graph of a function contained in $A$. (This is not usually how uniformization is presented, but it's equivalent for all intents and purposes.) My question is then:
<blockquote>
Assume $V=L$. Let $D$ be the set of (boldface) $\Delta^1_2$ elements of $\omega^\omega$; does $D$ have the uniformization property?
</blockquote>
Now, it seems clear to me that $D$ should <strong>not</strong> have the uniformization property. [EDIT: As Joel's answer below shows, this is completely wrong.] The counterexample should be just the $\Delta^1_2$ well-ordering $\prec$ given by the assumption that $V=L$: uniformizing $\prec$ requires us to choose, for each real $r$, a real $s$ such that $r\prec s$; and although $\prec$ is $\Delta^1_2$, the usual way of doing this - choosing the immediate $\prec$-successor of $r$ - is no longer $\Delta^1_2$.
However, I don't know how to show that $\prec$ - or any other $\Delta^1_2$ set - cannot be uniformized in $\Delta^1_2$. I suspect I'm just missing something fairly simple.
<hr>
Note: it is known that the boldface pointclasses $\Pi^1_1$ and $\Sigma^1_2$ have the uniformization property, and assuming large cardinals, the uniformization property can be further propagated to every pointclass $\Pi^1_{2n+1}$, $\Sigma^1_{2n}$. On the other hand, the class $\Delta^1_1$ of Borel sets lacks the uniformization property, provably in $ZFC$.
|
Unless I am mistaken, it seems to me that $\Delta^1_2$ does have the uniformization property in $L$.
For any set $A$ in $\Delta^1_2$, let $B$ select the $L$-least witness on each slice. So $B$ uniformizes $A$, and the graph of $B$ appears to be $\Delta^1_2$, by the following reasoning:
<ul>
<li>$x\oplus z\in B$ if and only if it is in $A$, and for every well-founded countable model $M$ of $V=L$ containing $x$ and $z$, if $y$ is a real in $M$ preceding $z$, then $x\oplus y\notin A$.</li>
<li>$x\oplus z\in B$ if and only if it is in $A$, and there is a well-founded countable model $M$ of $V=L$ containing $x$ and $z$ such that whenever $y$ is a real in $M$ preceding $z$, then $x\oplus y\notin A$. </li>
</ul>
The point here is that the countable well-founded models are correct about the $L$-predecesors of the reals that they can see. So we can use any or all of them when verifying that $z$ is least such that some $\Delta^1_2$ property holds. Note that the "for every real $y$ in $M$" is merely a natural number quantifier, since $M$ is coded as a countable structure. So the first of these characterizations is $\Pi^1_2$ and the second is $\Sigma^1_2$, and so it is $\Delta^1_2$ overall.
|
Assuming $V=L$, we have $AC$ and $CH$, so every set of reals is at most $\aleph_1$-Suslin. So we can find scales for them and uniformize them. In particular, every $\Delta^1_2$ set of reals can be uniformized. As Joel said in the comment above, this works for all $\Delta^1_n$ under $V=L$.
|
https://mathoverflow.net
|
45,449
|
[
"https://mathoverflow.net/questions/45449",
"https://mathoverflow.net",
"https://mathoverflow.net/users/756/"
] |
<strong>SymMonCat</strong> is the cartesian 2-category of symmetric monoidal categories, braided monoidal functors, and monoidal natural transformations. The terminal symmetric monoidal category <strong>1</strong> has one object $I$ and $I \otimes I = I$.
A category enriched over a monoidal category $V$ assigns to each pair of objects $X, Y$ an object hom$(X,Y)$ in $V$ and to each object $X$ a morphism $id_X:I \to \mbox{hom}(X,X)$ in $V$.
When $V = $ <strong>SymMonCat</strong>, the morphism $id_X:1 \to \mbox{hom}(X,X)$ is a braided monoidal functor; since monoidal functors preserve the monoidal unit and tensor product, it must map the unit $I$ in <strong>1</strong> to the unit $I$ in hom$(X,X)$.
Is there a different way of enriching over <strong>SymMonCat</strong> such that $id_X$ does not pick out the monoidal unit (other than considering it a subcategory of <strong>Cat</strong>)?
|
[Ignore this first part, I'm just leaving it for the context to the comments below.]
It is hard for me to understand why you would want to enrich in symmetric monoidal categories, have an identity, and also want this identity to <em>not</em> be the unit of the symmetric monoidal category.
That said, you can always do away with units altogether and consider "enriched categories without identities". Is this what you are after?
<hr>
After Mike's example I am now on board. What you probably want to do is enrich over the symmetric monoidal 2-category of symmetric monoidal categories where the monoidal structure is the "tensor product of symmetric monoidal categories". What is this you ask?
The functor category between two symmetric monoidal categories, $Fun^\otimes(B,C)$, is naturally equipped with a symmetric monoidal structure (using pointwise multiplication). The tensor product of symmetric monoidal categories $(-) \otimes B$ is the (weak) left adjoint to the functor $Fun^\otimes(B, -)$. Thus $A \otimes B$ is a symmetric monoidal category such that symmetric monoidal functors from it to $C$ are the same as "bilinear" functors $A \times B \to C$. Now the monoidal unit for this tensor product is the free symmetric monoidal category on one object $\mathbb{F}$ (which is the category of finite sets and permutations).
In this way, if you enrich in (SymMonCat, $\otimes$) you get a unit which is a functor $ \mathbb{F} \to Hom(a,a)$, and this is equivalent to just picking out some object, not necessarily the unit object of $Hom(a,a)$.
The prototypical example is the 2-category of symmetric monoidal categories itself.
|
I suppose that the monoidal structure on $SymMonCat$ you mean is the cartesian one, and that by a braided functor you mean a symmetric pseudo-monoidal one (a functor that commutes with the canonical symmetry isomorphisms, and whose coherence data are isomorphisms). Then $\mathcal{C}(X, X)\in SymMonCat$ has an internal symmetric monoidal structure and another monoidal structure given by the monoidal multiplication $\mathcal{C}(X, X)\times\mathcal{C}(X, X)\to \mathcal{C}(X, X)$, with the relative axioms coming from the definition of a $SymMonCat$-enriched category.
Then $\mathcal{C}(X, X)$ is a bimonoidal category, and the two "monoidal identities" for the two monoidal structures are $I_X$ (for the internal monoidal structure) and the (essential) image of the morphism you called $id_X$. A very reduced (decategorified) example of this is a rig, an algebraic structure with two monoidal laws, one abelian. In general the two units are different (trivial example: $(\mathbb{N}, +, 0 ; \times , 1)$).
|
https://mathoverflow.net
|
74,800
|
[
"https://cs.stackexchange.com/questions/74800",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/55524/"
] |
We define $\mathbf P$ as the set of problems solvable in polynomial time. We define $\mathbf{NP}$ as the set of problems with a verifier $ \in \mathbf P$.
Is there a name for problems whose verifiers are $\in \mathbf {NP}$ (e.g., $\mathbf{N(NP)}$)? I can't see this being a very <em>useful</em> complexity class, but, for example we have that $\mathbf{NP} \neq \mathbf{N(NP)} \implies \mathbf{P} \neq \mathbf{NP}$, so it might be an area of research for that reason alone.
|
Suppose you had a problem such that for any $x \in L$, there is a certificate $v$ such that $v$ can be checked against $x$ by a nondeterministic polynomial-time algorithm. That is, for a valid $(x, v)$ pair, there is some further certificate $v'$ such that it takes polynomial time to check that $((x, v), v')$ is a correct verification.
But then, you could simply combine $(v, v')$ into a single witness, and run the check in deterministic polynomial time.
Thus, the class ${\bf N(NP)}$ you describe is really just equal to ${\bf NP}$.
|
If a problem has a verifier, you can guess the certificate (and any nondeterministic choices of the verifier) with a nondeterministic TM, so the problem is automatically in NP.
|
https://cs.stackexchange.com
|
28,061
|
[
"https://quant.stackexchange.com/questions/28061",
"https://quant.stackexchange.com",
"https://quant.stackexchange.com/users/5641/"
] |
I'm self-studying for an actuarial exam and I am curious about a property of the antithetic variate method for increasing Monte Carlo pricing accuracy (i.e., for every random draw of $z$, also include a draw of $-z$ in the simulation).
<strong>Question:</strong>
Assume the Black-Scholes framework and consider a European call option with strike $K$ expiring in $T$ years on a non-dividend paying stock currently priced at $S_0$ with an annual volatility $\sigma$. Suppose that a Monte Carlo simulation is used to estimate the expected value at expiration of the option.
The simulation was performed using $n$ draws $u_1, u_2, ..., u_n$ from a uniform distribution to generate the stock price. Suppose that each of these draws generates a stock price at expiration which gives a zero payoff for the call option and therefore $E(\text{Payoff}) = \frac{1}{n} \sum_{i = 1}^n C(S_T^i, K, T) = 0$, where $S_T^i$ is the stock price at expiration for the $i$th draw.
Using the same uniform draws, and applying the antithetic variate method, will $E(\text{Payoff}) = \frac{1}{2n} \sum_{i = 1}^{2n} C(S_T^i, K, T) > 0$ necessarily?
My intuition says yes, but I don't have a way of convincing myself why.
|
No, you can have
$$
\frac{1}{2n}\sum_{i=1}^{2n} C(S^i_T,K,T) = 0
$$
First off, there's the obvious case where $n=1$ and $u_1 = 0.5$: then $z_1 = 0$, so the antithetic draw $-z_1$ gives the very same zero payoff.
More generally, for options way out of the money it is common to have
$$
\frac{1}{n}\sum_{i=1}^{n} C(S^i_T,K,T) = 0
$$
even for very large $n$. Antithetic sampling does not change that.
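To see this concretely, here is a small simulation sketch (Python; all parameter values are made up) of a deep out-of-the-money call for which the plain and the antithetic estimates are both exactly zero:
<pre><code>import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
S0, K, T, r, sigma, n = 100.0, 300.0, 1.0, 0.0, 0.2, 1000

u = rng.uniform(size=n)
z = norm.ppf(u)                             # N(0,1) draws from the uniforms
for draws in (z, np.concatenate([z, -z])):  # plain vs. antithetic
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * draws)
    payoff = np.maximum(ST - K, 0.0)
    print(payoff.mean())                    # both print 0.0 for this strike
</code></pre>
With the strike this far out of the money, a nonzero payoff needs a draw beyond roughly $5.5$ standard deviations, so neither $z_i$ nor $-z_i$ ever gets there.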
|
No. The antithetic variate method is meant to produce a smaller standard error than the plain (non-antithetic) method, which is a direct result of the negative correlation between the original variable and the antithetic variable.
For an OTM option there will definitely be a lot of paths ending up with value 0. A better choice may be importance sampling.
Write out the expectation under the risk-neutral measure and manually extract another normal density with a higher mean (in this case). Then use the Monte Carlo method to compute this new expectation. You can certainly apply the antithetic method to it as well to reduce the SE.
|
https://quant.stackexchange.com
|
1,118,226
|
[
"https://math.stackexchange.com/questions/1118226",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/203535/"
] |
I read that $y=ax^2+bx+c$ is a quadratic function where $a\neq0$, but is it true that $a$ really can't be zero? I think it is, because if $a$ <strong>were</strong> zero, there wouldn't be a parabola; there would just be a straight line, so it wouldn't be quadratic, since the $x^2$-term is what determines whether the parabola opens upward or downward. Is this right?
|
If $a=0$, you no longer have a parabola.
Instead, you have a line: $y = bx+c$, with slope equal to $b$, and a $y$-intercept at $c$.
|
I guess I'm right: if $a$ is zero, then $y=ax^2+bx+c$ reduces to $y=bx+c$, with $b$ as the slope and $c$ as the y-intercept, leaving a straight line, not a parabola.
|
https://math.stackexchange.com
|
316,555
|
[
"https://softwareengineering.stackexchange.com/questions/316555",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/113314/"
] |
In C++ I frequently see these two signatures used seemingly interchangeably:
<pre><code>void fill_array(Array<Type>* array_to_fill);
Array<Type>* filled_array();
</code></pre>
I imagine there is a subtle difference, but I don't know what it is.
Could someone explain when I might prefer one form over the other?
|
The first kind of signature is usually preferable.
The difference is that the second signature requires the array to be created inside the function. In particular, the second signature effectively requires the array to <strong>outlive the scope in which it was created</strong>. So what we're really comparing are these two snippets:
<pre><code>void foo1() {
    Array<Type>* array = /* allocate memory and call constructors */
    fill_array(array);
    do_stuff_with_array(array);
    /* free memory and call destructors */
}
void foo2() {
    Array<Type>* array = filled_array(); /* allocation/constructors happen elsewhere */
    do_stuff_with_array(array);
    /* free memory and call destructors */
}
</code></pre>
The second version is potentially problematic for a few reasons:
<ol>
<li><strong>It's error prone.</strong> Functions which return pointers or references to something they created are very easy to get wrong, either in the form of undefined behavior or in the form of a completely unnecessary performance loss. Since you're working with raw pointers, it's easy to invoke undefined behavior by returning a pointer to a local variable that's no longer valid after the function has returned. If the array was being passed around as a regular object or a reference instead, you might suffer an expensive copy when <code>filled_array()</code> returns (the details of when this may or may not happen are complicated, see StackOverflow for all the gory details).</li>
<li><strong>You don't know how <code>filled_array()</code> allocated the memory for the array</strong>, so in principle you don't know how to deallocate it correctly. You may be able to get away with assuming it was allocated "normally", but if you don't control the allocation yourself, you just don't know for sure. It's possible some custom allocator was used, and it's also possible the pointer was saved somewhere so that a totally different function can do the deallocation at a specific time later in the pipeline (I believe this is common in C libraries). While a function that accepts a pointer as an argument could theoretically do this, it's far less likely.</li>
<li><strong>Memory/object reuse.</strong> What if I already have memory allocated for an Array, or an actual Array, when it comes time to call <code>filled_array()</code>? Unfortunately, <code>filled_array()</code> controls both the memory allocation and the value generation logic, so it's going to allocate more memory whether or not you need it. If you have many functions like this in a row, you're potentially wasting a huge amount of time and memory on allocations that could be completely skipped if you instead accepted pointers or references to memory controlled by client code. Or more concisely: Avoid writing functions that decide how memory should be managed <em>and</em> do something else with that same memory. Single responsibility principle and all that.</li>
</ol>
Of course, you <em>should</em> be passing the Array around by reference rather than pointer. And you should be using RAII objects (whether that means "just an Array" or a smart pointer to an Array) as much as possible so that all the allocation and deallocation is managed for you. But these arguments for creating the object at the correct scope still apply, since switching to references and RAII objects alone may only change correctness bugs into performance "bugs" (some of which move semantics can't automagically fix).
|
It seems most likely that the second one returns <code>Array<Type></code> and not <code>Array<Type>*</code>. In the first case, there is an <code>Array<Type></code> somewhere and you pass a pointer to it, so the function can fill it. In the second case, the function creates an object and returns it (unless the type is <code>Array<Type>*</code> and I don't know what's going on).
If you use constructors with rvalue references, the second is just as efficient but much more readable. Without rvalue references (older C++ versions), the first version was much more efficient, because the second needed to create a completely pointless copy of the returned value.
Clarification: I'm assuming that the second function returns an Array, not a pointer to an Array. Returning a pointer to an array would mean there must be an Array somewhere; so where is it? It doesn't make sense. But if the function returns an Array, that Array would most likely be created as a local variable inside the function. Then a copy constructor is called to <strong>copy</strong> it to an object inside the calling function, and the original is destroyed. That is the copy that you can avoid in newer C++ versions.
|
https://softwareengineering.stackexchange.com
|
156,874
|
[
"https://dba.stackexchange.com/questions/156874",
"https://dba.stackexchange.com",
"https://dba.stackexchange.com/users/111846/"
] |
My question is regarding database design/architecture, but I'll use a familiar example to explain it.
Suppose there is a database for banking. This database has a table called <code>Customers</code> which stores <code>ID</code>, <code>Name</code>, <code>Address</code>, etc.. Now each of these customers can have their own sub-table for <code>Transactions</code>, with columns like <code>Transaction ID</code>, <code>Date</code>, <code>Time</code>, <code>Amount</code>, etc.
My question is: what is an efficient way to store this <code>Transactions</code> table? Should I create a new table in the database, add a field <code>User ID</code>, and insert transactions of ALL customers into this table? Or should I add a text field <code>Transactions</code> to the <code>Customers</code> table and store all transactions of each user in JSON format? I'd like to know how this is done in professional industries.
|
The following should help with the execution time:
<ul>
<li>remove the <code>ORDER BY</code> if it isn't strictly necessary</li>
<li>replace the join of <code>dr</code> table with <code>WHERE EXISTS (SELECT 1 FROM tms_door_record_raw As dr WHERE c.card_no = dr.card_no AND dr.record_time BETWEEN '2016-11-01' AND '2016-11-02')</code></li>
<li><code>GROUP BY</code>, may now not be needed</li>
<li>widen the index on <code>tms_door_record_raw</code> to include both <code>card_no</code> and <code>record_time</code></li>
</ul>
Test this and see if progress is being made. Further steps may be necessary but hopefully this is in the right direction.
|
Remove the GROUP BY.<br>
If you have (logically valid) duplicates, remove them at an early stage.
|
https://dba.stackexchange.com
|
169,485
|
[
"https://electronics.stackexchange.com/questions/169485",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/32541/"
] |
I'm reading the first chapter of the AoE. I've come across this section on differentiator/integrator circuits and couldn't understand the math behind it.
<img src="https://i.stack.imgur.com/pNe7M.png" alt="differntiator">
<img src="https://i.stack.imgur.com/2cM4z.png" alt="integrator">
For the first picture, it says a small RC means dV/dt is much lower than dVin/dt, but I don't understand how it does this. Similarly, I'm not sure how a large RC means Vin >> V. I know this may be a petty question, so please be patient with me.
|
What they mean is that a passive RC filter can only approximate a differentiator/integrator as long as the time constant is much slower than the signal. The reason for this is that the true behavior of RC and RL circuits is exponential in time; e.g., from basic circuit theory, the step response of an RC circuit is $$V=V_0 (1-e^{(-t/RC)})$$ If RC is large, this exponential is close to linear for small values of t, yielding behavior close to an ideal integrator/differentiator.
Another way to think about it is to consider the integrator case in 1.15 with a constant voltage input (e.g. Vin = 10 V). We expect the output to be a linear ramp of constant slope (the integral of a constant is a straight line). However, if RC is too small, what happens is that after some time of integration, V will increase due to the capacitor C charging up. This decreases the current through R, which in turn decreases the "slope" of the output voltage. At some point, when V = Vin, the integrator stops working completely. This is how the behavior of the passive integrator deviates from the ideal integrator. Conversely, if RC is large enough, the voltage across capacitor C never gets large enough to reduce the current through resistor R noticeably, so the current through R stays approximately constant and the circuit behaves as an integrator.
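A quick numerical illustration of that deviation (a sketch; the component values are arbitrary): integrate the RC step response for a 10 V input and compare a small and a large time constant against the ideal ramp.
<pre><code>import numpy as np

Vin, R = 10.0, 1e3
t = np.linspace(0, 5e-3, 1000)  # watch the first 5 ms

for C in (1e-6, 1e-3):          # 1 ms vs. 1 s time constant
    tau = R * C
    v = Vin * (1 - np.exp(-t / tau))  # true RC response to the step
    ideal = Vin * t / tau             # what a perfect integrator would give
    err = np.max(np.abs(v - ideal)) / Vin
    print(f"tau = {tau:.0e} s: worst-case deviation = {100 * err:.2f}% of Vin")
</code></pre>
The short time constant deviates wildly over this window, while the long one tracks the ideal ramp almost perfectly.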
Note that you can use op-amps and other active components to force the current through the resistor (in 1.15) to be constant, this is how the "ideal integrator" circuit below works:
<img src="https://i.stack.imgur.com/q8iSJ.jpg" alt="enter image description here">
|
As an alternative, you can watch the phase response of the circuits.
Remember that an ideal differentiating (integrating) process requires a phase shift between input and output of +90deg (-90deg). A passive C-R resp. R-C combination allows these values for infinite frequencies only. Hence, only an approximation of the required operation is possible for relatively large frequencies only (far above 1/RC).
As an active example, the shown active (inverting) integrator enables the required phase shift (with minor errors) within a relatively broad frequency band - however with a value of +90deg (due to the inverting property).
|
https://electronics.stackexchange.com
|
178,582
|
[
"https://physics.stackexchange.com/questions/178582",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/27123/"
] |
I've previously learned that massive particles cannot achieve the speed of light.
But recently I read that, in the case of gels that refract and bounce light around internally enough that it travels at everyday speeds (and, by extension, in how electromagnetism propagates through matter generally), the photons are thought of as interacting with the massive medium, effectively gaining mass and moving at a speed less than the speed of light as a result.
I'm not sure if this is just how the paradigm is set up to frame the phenomena, or if this paradigm carries over seamlessly into other accepted theories elsewhere in science, but it made me think that the exclusivity of mass and light-speed might apply not only in the sense that <em>particles with speed c are unable to have mass</em>, but also the converse: <em>particles without mass are unable to have speeds less than c</em>.
My question about the properties of all particles, with syntactical brevity:
<blockquote>
Mass exclusive-or speed of light is true?
</blockquote>
|
First, we will look at the energy of a free relativistic particle of (rest) mass $m$ moving with velocity $v$:
$$E = \frac{mc^2}{\sqrt{1-\frac{v^2}{c^2}}}$$
where $E=mc^2$ when $v=0$. We now consider a few cases:
<ol>
<li>$m\ne0$:
In this case, $E\rightarrow\infty$ as $v\rightarrow c$. Therefore, a massive particle that at any point of time is moving at less than the speed of light cannot practically be accelerated to achieve the speed of light. A massive particle that has always existed with $v$ = $c$ however, has infinite energy - this leads to trouble, like infinite transfers of energy to ordinary matter being possible in interactions all over its path - and is therefore considered unphysical.</li>
<li>$m\rightarrow0$:
Here, we demand that the particle has a finite energy (for various reasons such as its being able to produce some measurable effects) in spite of its vanishing mass, and this can only be possible if $v=c$, so that $mc^2 = E\sqrt{1-\frac{v^2}{c^2}} = 0$. In these cases, the energy is often determined by other things. For example, a photon has an energy determined by its frequency $\nu$: $$E = h\nu$$</li>
</ol>
So, at least from an energy point of view, massless particles can only ever travel at the speed of light, and massive particles only at lesser speeds. This conclusion, of course, assumes that the above expression for energy also holds for massless particles. While a potentially suspect assumption, it is ok if we expect a smooth transition between the massless case and the limiting case of very low mass.
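A quick numerical illustration of case 1 (the velocities below are arbitrary choices):
<pre><code># Energy of a massive particle in units of its rest energy mc^2,
# showing the divergence as v approaches c.
for beta in (0.9, 0.99, 0.999, 0.9999):
    gamma = 1.0 / (1.0 - beta**2) ** 0.5
    print(f"v = {beta}c  ->  E = {gamma:6.1f} mc^2")
</code></pre>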
|
Every particle needs to have energy to be a particle (if it had none, it wouldn't even exist). Since energy is equivalent to mass and therefore gravitates, I would say YES: all particles that have a speed less than the speed of light must also have mass.
Because the speed of the particle is less than the speed of light an observer could travel with the same velocity as your particle and experience the particle's rest mass (in contrast to photons, which have energy and gravitate, but have no rest mass).
|
https://physics.stackexchange.com
|
335,218
|
[
"https://softwareengineering.stackexchange.com/questions/335218",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/157012/"
] |
From Wikipedia on the Single responsibility principle:
<blockquote>
... class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility.
A class or module should have one, and only one, responsibility. As an example, consider a module that compiles and prints a
report. Imagine such a module can be changed for two reasons. First,
the content of the report could change. Second, the format of the
report could change. These two things change for very different
causes; one substantive, and one cosmetic. The single responsibility
principle says that these two aspects of the problem are really two
separate responsibilities, and should therefore be in separate classes
or modules. It would be a bad design to couple two things that change
for different reasons at different times.
The reason it is important to keep a class focused on a single concern
is that it makes the class more robust. Continuing with the foregoing
example, if there is a change to the report compilation process, there
is greater danger that the printing code will break if it is part of
the same class.
</blockquote>
Is there a rule or general guideline on how to define the responsibilities of methods in a class?
When we talk about inanimate objects, it's easy to see that compiling a report and printing a report are two different responsibilities. However, when it comes to animate objects like a dog, it is much less evident which methods are different in nature.
Say you have:
<pre><code>class dog {
    public $breed;
    public $age;
    public $color;

    function bark($loudness) {
        // ...
    }
    function pee($amount) {
        // ...
    }
    function run($speed, $distance, $direction) {
        // ...
    }
    function growl() {
        // ...
    }
    function drink() {
        // ...
    }
}
</code></pre>
How can you tell which responsibilities should go into a separate class?
|
<h2>General</h2>
Single Responsibility has something to say about semantic redundancy and false semantic cohesion of code fragments (modules, classes, methods, functions). The problem is (as with all SOLID principles) that <strong>it's not about applying them, it's about identifying a violation of them</strong>. Once you think something violates the SRP, you can devise a thought experiment (coordinated with your business people) that your code has to pass to verify your assumption.
<h3>Semantic redundancy</h3>
If a functional requirement changes and you have to change two code fragments for the same reason, you violate the SRP.
This often happens if a developer does not know that some logic already exists that would solve the problem, so he develops it once again.
<h3>False semantic cohesion</h3>
If a functional requirement changes and the change produces an unintentional side effect in another code fragment, you violate the SRP.
This happens if technically savvy developers try to reduce code duplication without considering semantics.
E.g. a local-part check for an email address was implemented at several locations in the code base. A developer extracts the code into one central method because it looks technically identical. Later, another developer has to change the local-part check for one case and does not verify that all other usages remain correct under the new implementation.
<h2>Conclusion</h2>
Duplicate code may be a reason to suspect an SRP violation. But you have to be careful when you put code together that merely seems to be equal.
You cannot escape from semantics. You have to identify what the code really means in context to decide whether there is a violation of SRP.
You can break it down to: you have to put things together that belong together, and you have to isolate things that do not belong together. But this is an identification process.
In your example you have a Dog class with several methods that represent actions of a dog. If you are paranoid you can see your first redundancy: all methods can be "executed". So "execution" is what they all have in common. But we would have a hard time extracting this part, as the programming language itself takes care of execution. So there are lower boundaries to SRP that are set by the programming language in use.
OK. Your dog barks, pees, drinks, growls and runs. I can only say that I currently see no violation of SRP. But that doesn't mean there isn't one. Once you find an indicator of semantic redundancy or false semantic cohesion, you can act.
Finally, I want to say that sometimes an SRP violation can be tolerated when the code is expected to be stable against change. But that is pure pragmatism, never a rule.
|
The responsibility here is <code>dog</code>. Dog is as dog does.
Many people read the Single Responsibility Principle as <em>"Must do only one thing."</em> That's not what it means. Bob Martin, author of the principle (and prolific source of mass confusion for inexperienced programmers everywhere) says it like this:
<blockquote>
Every class should have only one reason to change.
</blockquote>
I view a class as embodying a "purpose" or "area of responsibility." If you write a class that contains <code>Create</code>, <code>Read</code>, <code>Update</code> and <code>Delete</code> methods, those aren't four responsibilities; they are just one: <em>Data Access.</em>
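As a hypothetical Python sketch of the report example quoted in the question (the class and method names here are invented for illustration), the two reasons to change live in two classes, with a thin coordinator composing them:
<pre><code>class ReportCompiler:
    """Changes only when the *content* of the report changes."""
    def compile(self, data):
        return {"title": "Monthly report", "rows": list(data)}

class ReportPrinter:
    """Changes only when the *format* of the report changes."""
    def render(self, report):
        lines = [report["title"]] + [f"- {row}" for row in report["rows"]]
        return "\n".join(lines)

# Thin coordinator: it contains no compilation or formatting logic of its own.
report = ReportCompiler().compile(["sales up", "costs down"])
print(ReportPrinter().render(report))
</code></pre>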
|
https://softwareengineering.stackexchange.com
|
62,265
|
[
"https://mathoverflow.net/questions/62265",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2284/"
] |
This semester, I teach an introduction to probability course tailored for students with no science background and so with very, <em>very</em> few prerequisites. We started with the basics of analytic combinatorics, then moved on to random variables and the study of common laws (binomial, hypergeometric, geometric, Poisson). The audience being what it is, I try to avoid calculus derivations of probability facts as much as possible.
For some aspects of the course, it worked out well (for instance the derivation of the expectation of the binomial law), but because I am barely more knowledgeable than my students when it comes to probability, I have been unable to answer this question:
<blockquote>
Is there a set of natural probability properties which characterizes the discrete Poisson law?
</blockquote>
If yes, then I could use this as a definition of the Poisson law, which would suit my students better than saying "it's the law such that <span class="math-container">$P(X=k)=e^{-\lambda}\frac{\lambda^{k}}{k!}$</span>". By natural above, I want to convey the meaning that I hope they can be formulated using natural language (like, say, memorylessness) rather than using analytic objects.
More precisely, what I have in mind is the following:
<blockquote>
Is there such a set of properties which would make it at least a little intuitively plausible that the sum of two variables following Poisson law also follows Poisson law?
</blockquote>
Of course, the proof of the above fact is completely elementary, but it would still be above the level of everyone in the audience except perhaps the 3 top students.
Note that I would be happy even if proving that this set of properties characterize Poisson law turned out to be much harder than anything I will do in this course (or even much harder than anything I know myself about probability), because what I am looking for is not logical rigour but rather psychological efficiency: in 10 years, my students will have completely forgotten what a derivative is, but I would like them to be able to recollect something if confronted with an epidemiological survey using random variables (at least my most successful students use this course to strengthen their math knowledge before studying medicine).
I realize this question is very elementary, and would understand if it is deemed inappropriate, but the standard references I might consult on the subject will invariably (and with good reasons) develop much more calculus that my students will ever know before dealing with such questions (typically, they will characterize the Poisson law as the limit of the binomial law via Stirling's formula).
|
What about the characterization of Poisson point processes?
Let us consider a counting process $(N(t))_{t \ge 0}$. That is, $N(0)=0$, $N(t)$ increases only by jumps of height $1$, and is right-continuous. You can see $N(t)$ as the number of points of a random subset of $]0,t]$.
Then $(N(t))_{t \ge 0}$ is a homogeneous Poisson point process if and only if:
1) the increments are independent
2) the increments are stationary : $N(t+s)-N(t)$ has the same law as $N(s)$.
(Maybe there is a further regularity assumption).
This implies that the increments are Poisson distributed: there exists $\lambda$ such that $N(s)$ is distributed according to a Poisson law with parameter $s\lambda$ for all $s$. This shows that, under seemingly general conditions, the Poisson distribution appears.
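A small Python simulation sketch of this (the rate, horizon, and trial count below are arbitrary choices): build a counting process from i.i.d. exponential waiting times, which makes the increments independent and stationary, and check the Poisson signature that the mean and variance of $N(t)$ both come out near $\lambda t$:
<pre><code>import numpy as np

rng = np.random.default_rng(0)
lam, t, trials = 2.0, 5.0, 20_000

counts = np.empty(trials, dtype=int)
for i in range(trials):
    gaps = rng.exponential(1.0 / lam, size=200)      # i.i.d. waiting times
    counts[i] = np.searchsorted(np.cumsum(gaps), t)  # N(t) = arrivals before time t

print(counts.mean(), counts.var())   # both should be close to lam * t = 10
</code></pre>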
You can also see this from a more geometrical point of view, by considering more general point process than on the line.
|
We can't expect the Poisson distribution to arise in a completely finitary way, since the number $e$ must come from somewhere. On the other hand, it should definitely not be necessary to introduce Stirling's formula.
I think the most natural approach is to define Poisson($\lambda$) as the limit distribution of the number of heads in a sequence of $N$ independent flips of a biased coin with probability $\lambda/N$ of heads.
This must be accompanied by some derivation showing that there is such a limit, and leading to the formula you state. Such a derivation will use binomial coefficients and the definition of the number $e$, but not much more.
But even without the derivation, if we just assume that the limit exists, it shows why the sum of two independent Poisson variables is again Poisson.
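For what it's worth, the convergence is fast enough to demonstrate numerically; a Python sketch (the choices $\lambda = 3$ and $N = 10000$ are arbitrary):
<pre><code>from math import comb, exp, factorial

lam, N = 3.0, 10_000
print(" k   Binomial(N, lam/N)    Poisson(lam)")
for k in range(8):
    b = comb(N, k) * (lam / N) ** k * (1 - lam / N) ** (N - k)
    p = exp(-lam) * lam ** k / factorial(k)
    print(f"{k:2d}   {b:.6f}              {p:.6f}")
</code></pre>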
|
https://mathoverflow.net
|
28,118
|
[
"https://physics.stackexchange.com/questions/28118",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9129/"
] |
The theory I've recently come to postulates that:
<ol>
<li>The volume of space filling the universe is finite and is constantly growing, thus the boundaries of the universe are constantly expanding.</li>
<li>The expansion of the universe's boundaries is caused by light that is converted into fresh space while reaching the universe's boundaries from within.</li>
<li>The expansion of the universe's matter is caused not by the kinetic aftermath of a big bang somewhere in the distant past, but instead by the tendency of the matter to distribute itself evenly across the ever-increasing volume of space (which is perhaps connected to the cosmological constant).</li>
</ol>
Well, is it worth any constructive discussion (are there any existing theories of this kind?) or is it another example of why amateur physicists should not post their lunatic theories on this forum? And I like the second postulate best. Is there a way for it to branch into a separate theory if the combination of the three doesn't work out?
|
What you are describing looks like a hypothesis to me. A hypothesis is an idea. You have an idea. A theory, in the sense it is used by modern physics, is an idea about how the universe works which is supported by some rigorous elements, whether we're talking about some mathematical explorations (such as in the case of string theory, or back in the day, relativity), or observations (early chemical experiments). A theory has withstood challenges and attempts to discredit it by very knowledgeable and determined people.
In short, you should explore your ideas, but you should always have a way to eliminate invalid ideas by testing them or finding their flaws.
Most important of all, a theory should be considered null (not bad, just without proven utility) if there are no justifications for it outside of your own intuition. Having said that, if your intuition tells you there is a good chance of there being something there, then you definitely should try developing it.
The important aspects of a new theory should be:
A) it explains our universe as well as, or better than, existing theories
B) explains currently unexplained things
C) can make predictions about things not yet observed.
If such predictions from your theory turn out to be false, then your theory has been falsified and you have competently practiced science. If on the other hand the predictions are accurate, then you have joined the club of frontier physicists!
The better a theory is, the more valid it appears in proportion to how hard people try to tear it apart and fail. This process often leads to additional, unexpected discovery.
Don't get discouraged by your first or currently favorite idea not turning out to be good - in a class I took a few years ago on evolutionary computation, our teacher told us that Einstein invented and mentally tested several ideas PER MINUTE when he was a patent clerk. This is less impressive (yet still very impressive) than it sounds once you understand that in science, discovery is often a process of search-and-evaluate. Ideas are a dime a dozen, quite literally. What makes good ideas live longer than bad ones is that they withstand scrutiny.
The toolkit that competent scientists have, which many regular citizens lack, is refined training for knowing what is plausible and what is not in the physical world. You can have a theory that contradicts existing theories, but it had better be EXPLANATORY, not just POSTULATORY. Because existing theories explain a lot, a new theory that hopes to replace existing ones needs to be better at everything the other theories do.
Harsh, honest and educated scrutiny is the best way to sort good ideas from bad ones.
Good luck!
|
The current theory for describing the large scale structure of the universe is General Relativity and in particular the FLRW metric. GR gives us a set of equations into which we can feed experimental data and from which get predictions. So far the predictions have agreed with every experiment we've done, and that's why we all believe GR.
To get anyone to take your ideas seriously you'll have to make them quantitative. To take your suggestion 2: what equations describe the conversion of photons into space? How do these equations avoid perturbing the behaviour of photons we observe in the lab? To take your suggestion 3: we know all matter/energy attracts all other matter/energy due to gravitation, so matter tends to clump together, not spread apart as you suggest - that's why we see stars and galaxies. What equations describe your suggestion that matter tends to spread itself evenly, and why don't we see deviations from the matter distribution predicted by gravity?
It's fun to think about new theories, and even the oldest and most boring of us did this as excitable young students, but most of us have had to admit that Einstein did it first and he did it best. Unless you can firm your ideas up into something I can do calculations with, you're unlikely to get many physicists interested.
|
https://physics.stackexchange.com
|
148,799
|
[
"https://softwareengineering.stackexchange.com/questions/148799",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2673/"
] |
<strong>When we say 'documentation' for a software product, what does that include and what shouldn't that include?</strong>
For example, a recent question asked whether comments are considered documentation.
But there are many other areas that this is a valid question for as well, some more obvious than others:
<ul>
<li>Manuals (obviously)</li>
<li>Release notes?</li>
<li>Tutorials</li>
<li>Comments</li>
<li>Any others?</li>
</ul>
Where is the line drawn? For example, if a 'tutorial' is documentation, is a 'video tutorial' documentation, or is it something else?
Generally, something in software isn't 'done' until it is implemented, tested and documented. Hence this question: what should we consider part of the documentation when deciding whether something is 'done'?
<hr>
<sub>Question inspired by recent customer feedback at our conference indicating that our docs needed more 'samples', something we previously hadn't considered as carefully as we should have.</sub>
<sub>Audience: Software developers using our database(s), programming languages and associated tooling (such as admin clients to said DB)</sub>
|
The goal of documentation is to describe and explain the software product, so you could define the documentation to be the set of artefacts that contribute to that description or explanation. You'd probably not consider related <em>actions</em> as part of the documentation: e.g. a week-long training course is not documentation but the course materials are; a five-minute whiteboard chat is not documentation but an image of the whiteboard is.
Keeping the goal (explaining the software) in mind, the documentation is finished when the <em>customer is satisfied with the explanation</em>: just as the software is finished when the customer is satisfied with the software. Bear in mind that the customer for the documentation is not always the same as the customer paying for the software: support personnel, testers, salespeople and others will all need some understanding of what the software does and how it works.
This helps understand where your boundary for what should be included in the documentation lies. Using "reader" as a convenient shorthand, though we accept that videos or audio could be included: anything that helps the reader gain the information they need about the software is documentation they can use, everything else isn't. If your customer needs detailed walk-throughs of all their use cases, then that needs to be part of the documentation. Your developers probably need source code, information about your version control repository and build instructions: that's documentation for them, but as described above it wouldn't be part of the customer's documentation.
|
I think you took away the wrong lesson from your conversation at the conference. Modern software development methodologies advocate that the development team should work closely with its customers (or a product owner who acts as a customer proxy). For all work delivered, the definition of "done" is something that is negotiated between the team and its customer, and this is done on a recurring basis as the software is being developed.
The problem you ran into is that there was a disconnect between what you assumed the customer needed and what they expected you to deliver, so at the end you got the "hey, where are all the samples?" surprise.
As far as what counts as documentation... well, it's pretty much everything you listed and maybe a few more things that you didn't. But nobody can tell you how much documentation your project needs. Every project is different, and it is up to your team, your product owner and your customers to determine the level and type of documentation that is required for your project.
Some factors that would come into play:
<ul>
<li>Are you developing software v1.0 and then moving on to other projects, or is this an ongoing, long-term project? Comments/design docs become much more important in the latter case. On the other hand, if your customer is a mom-and-pop donut shop and you are writing a website for them that you will never see again... well, I guess code documentation is nice but not that important.</li>
<li>Are you developing a mobile game or software that controls a heart rate monitor in a hospital? Guess which one will have a definition of "done" that demands much more documentation?</li>
<li>Are your customers typical end-users or are they other developers? Do you have an API/SDK that you are exposing?</li>
<li>What is the level of expertise of your customers? This affects the choice of video tutorial vs. written material vs. some kind of interactive tutorial app</li>
<li>Do your customers care about what changed from version to version? Some do. Most don't. For a few, it's the law (or close to it) to care.</li>
</ul>
|
https://softwareengineering.stackexchange.com
|
477,858
|
[
"https://physics.stackexchange.com/questions/477858",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/195488/"
] |
Can someone explain in simple terms why you would use Nyquist frequency limits when processing a signal? What benefit does it provide, and how does it affect the results? And how does it relate to the Nyquist rate?
|
The Nyquist sampling theorem (sometimes called the Nyquist-Shannon sampling theorem) says, if you have a signal that is bandlimited with bandwidth <span class="math-container">$B$</span>, then if you sample it with a sampling period <span class="math-container">$T_s$</span> <em>strictly</em> less than <span class="math-container">$1/2B$</span>, then the original signal can be perfectly reconstructed from the samples.
We call the minimum sampling rate for ideal reconstruction, <span class="math-container">$f_N = 2B$</span> (<span class="math-container">$f_N$</span> being in samples per second and <span class="math-container">$B$</span> in hertz), the <em>Nyquist limit</em>.
If you sample a signal with a sample rate greater than the Nyquist limit, it is (in principle) possible to perfectly reconstruct the original continuous-time signal.
If you sample a signal with a sample rate below the Nyquist limit, you cannot perfectly reconstruct the signal, due to aliasing.
So if you want to retain "complete" information about the signal you are sampling, you must sample above the Nyquist limit.
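A minimal Python sketch of the aliasing mentioned above (the rates are arbitrary illustrative choices): a 110 Hz sine sampled at 100 samples per second yields exactly the same samples as a 10 Hz sine, so information above the 50 Hz Nyquist limit is lost:
<pre><code>import numpy as np

fs = 100.0                    # sampling rate in samples/second; Nyquist limit = 50 Hz
t = np.arange(0, 1, 1 / fs)   # one second of sampling instants

x_low = np.sin(2 * np.pi * 10.0 * t)    # 10 Hz: below the Nyquist limit
x_high = np.sin(2 * np.pi * 110.0 * t)  # 110 Hz: aliases down to 110 - 100 = 10 Hz

print(np.allclose(x_low, x_high))  # True: the two sets of samples are indistinguishable
</code></pre>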
|
As suggested by a comment, you should give us some context; there are several ways to answer the question. I'll provide a partial answer that is particularly relevant to physics, especially discrete periodic systems such as atoms in a solid. But there are other aspects, especially concerning continuous systems and time-domain questions.
For a discrete periodic system, the Nyquist frequency is the highest frequency possible. There are no higher frequencies.
It's easier to visualize space rather than time :-) so let's consider spatial frequencies. Imagine a system of balls threaded on a string, with the distance between neighboring balls the same for all neighboring pairs. A low spatial frequency / long wavelength looks like a wave. As you increase the spatial frequency, the wavelength gets shorter. Continue increasing the spatial frequency until the wavelength is two times the ball spacing. At that frequency one ball is displaced "left", the next "right", the third "left" again, etc. Every other ball is displaced maximally to the left or right.
There is no higher spatial frequency. Any arbitrary waveform on the system will consist of a sum of frequencies up to the maximum. There simply are no other frequencies to consider.
From a physics point of view, there is no consequence. From a math point of view, summations are limited, and algorithms to speed the calculation ("Fast Fourier Transform") can be used.
|
https://physics.stackexchange.com
|
705,179
|
[
"https://physics.stackexchange.com/questions/705179",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/123113/"
] |
My understanding of the flatness problem is that it says that if we leave out dark energy and inflation, then the density parameter <span class="math-container">$\Omega(t)$</span> tends to <span class="math-container">$\infty$</span> or <span class="math-container">$0$</span> unless we have <span class="math-container">$\Omega(t) = 1$</span> exactly. Thus, <span class="math-container">$\Omega(t) = 1$</span> is an unstable equilibrium point, making it very strange to observe <span class="math-container">$\Omega(t_{0})\approx 1$</span> today.
My question is, why is this called the "flatness problem?" I don't see the connection to geometry or curvature.
I understand that if <span class="math-container">$\Omega(t_{0})$</span> is close to <span class="math-container">$1$</span>, then <span class="math-container">$\Omega_{K}(t_{0})\equiv 1-\Omega(t_{0})$</span> would be close to zero, but how does this relate to the actual curvature value <span class="math-container">$K$</span>? In particular, isn't <span class="math-container">$K$</span> supposed to be constant (so the deviation from flatness is fixed)?
|
The relation between the curvature <span class="math-container">$k$</span> and the density parameter <span class="math-container">$\Omega$</span> is described by the first Friedmann equation.
<span class="math-container">$$(\frac{\dot{a}}{a})^2 +\frac{kc^2}{a^2} = \frac{ 8\pi G }{3}\rho$$</span>
Defining the Hubble parameter as <span class="math-container">$H = \dot{a}/a$</span> and the density parameter as <span class="math-container">$\Omega = 8\pi G \rho/(3H^2)$</span>, the sign of <span class="math-container">$\Omega - 1$</span> is the same as the sign of <span class="math-container">$k$</span>:
<span class="math-container">$$\frac{kc^2}{a^2 H^2} = \Omega-1 $$</span>
This says <span class="math-container">$|\Omega-1| \propto 1/\dot{a}^2$</span>. If there is no inflation, <span class="math-container">$\dot{a}$</span> decreases with time, so <span class="math-container">$|\Omega-1|$</span> grows. Our current observations give <span class="math-container">$\Omega \simeq 1$</span>, so the density parameter had to be even closer to 1 in the early universe. This fine-tuning is called the flatness problem.
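A back-of-the-envelope Python sketch of the required tuning, under the simplifying assumption of matter domination throughout (where <span class="math-container">$a \propto t^{2/3}$</span>, so <span class="math-container">$|\Omega-1| \propto 1/\dot{a}^2 \propto a$</span>; the 0.01 bound is an illustrative assumption):
<pre><code># If |Omega - 1| is at most ~0.01 today (a = 1) and grows like a during
# matter domination, the allowed deviation at earlier epochs is tiny.
dev_today = 0.01
for a in (1.0, 1e-2, 1e-4):
    print(f"a = {a:g}:  |Omega - 1| <= {dev_today * a:g}")
</code></pre>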
|
As you pointed out, as we go back in time, <span class="math-container">$\Omega_{\rm total}-1$</span> needs to be very small (i.e., <span class="math-container">$|\Omega_{\rm total}-1| \sim 10^{-61}$</span>).
This situation brings to mind the following question;
Why should the universe have started from such a unique situation?
In other words, why would the universe initially start from <span class="math-container">$|\Omega_{\rm total} -1| = 0.00000...000001$</span> when it could have taken different <span class="math-container">$\Omega_{\rm total}$</span> values?
The situation can be viewed from the following perspective;
If the universe took any other value, we wouldn't be here, so it had to take that value (the anthropic principle). But physicists don't like the anthropic principle. So the only remaining solution is the idea that for any initial <span class="math-container">$\Omega_{\rm total}$</span> value, the universe would end up with <span class="math-container">$\Omega_{\rm total} \approx 1$</span>. The inflation mechanism provides exactly that.
|
https://physics.stackexchange.com
|
205,712
|
[
"https://math.stackexchange.com/questions/205712",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/43306/"
] |
I have a set of N random integers between A and B.
Assuming that my random number generator is equally likely to return any integer between A and B, how can I calculate the probability that the next random integer is already present in my set?
I want to estimate how many random numbers I should generate in a batch such that I can say with probability P that at least one of the new integers does not already exist in the set.
Thanks
|
To complete your argument you have to show that $\ln(p_n)/\ln(n) = 1.$
Now, <em>if</em> you have that $p_n \sim n\ln (n),$ then $\ln(p_n) = \ln(n) + \ln\ln(n) + o(1)$, and so indeed $\ln(p_n)/\ln(n) \to 1$ as $n \to \infty$. So at least
this is consistent with what you are trying to prove.
On the other hand, what you know is that
$n\ln(p_n)/p_n \to 1$ as $n \to \infty$. Taking $\ln$ gives
$$\ln(n) + \ln\ln(p_n) = \ln(p_n) + o(1),$$
and so
$$\ln(n)/\ln(p_n) + \ln\ln(p_n)/\ln(p_n) = 1 + o(1).$$
Can you finish from there?
|
There is a problem with this approach. You took $x = p_n$, but that means $\ln(x) = \ln(p_n)$ and <i>not</i> $\ln(n)$.
|
https://math.stackexchange.com
|
936,611
|
[
"https://math.stackexchange.com/questions/936611",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/58742/"
] |
Let $p,q$ be positive integers such that
$$\dfrac{95}{36}>\dfrac{p}{q}>\dfrac{96}{37}.$$
Find the minimum value of $q$.
Maybe I can use
$$95q>36p$$
and $$37p>96q$$
and then find the minimum value from these?
Earlier I tried
$$\dfrac{49}{18}\approx 2.722>\dfrac{95}{36}\approx 2.639>\dfrac{96}{37}\approx 2.595,$$ so $\dfrac{49}{18}$ does not satisfy the condition.
idea 2: since
$$\dfrac{95}{36}=\dfrac{95\cdot 37}{36\cdot 37}=\dfrac{3515}{1332}$$
$$\dfrac{96}{37}=\dfrac{96\cdot 36}{36\cdot 37}=\dfrac{3456}{1332}$$
so
$$\dfrac{3515}{1332}>\dfrac{p}{q}>\dfrac{3456}{1332}$$
so
$$p\in(3456,3515),q=1332$$
|
An interesting trick to solve this kind of problem is to consider the continued fractions of the LHS and the RHS. We have:
$$\frac{95}{36}=[2;1,1,1,3,3],\qquad \frac{96}{37}=[2;1,1,2,7]$$
hence
$$\frac{13}{5}=[2;1,1,2]$$
just lies between the LHS and the RHS, and it is the rational number with the smallest denominator lying in that interval.
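One can confirm that $5$ is the smallest possible denominator with a short brute-force search; a sketch using Python's <code>fractions</code> module:
<pre><code>from fractions import Fraction

lo, hi = Fraction(96, 37), Fraction(95, 36)
q = 1
while True:
    # p/q must lie strictly between lo and hi, i.e. p is near 2.6 * q
    hits = [p for p in range(2 * q, 3 * q + 1) if lo < Fraction(p, q) < hi]
    if hits:
        print(q, hits)   # prints: 5 [13]
        break
    q += 1
</code></pre>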
|
We have
$$2.64\gt a=\frac{17575}{5\cdot 36\cdot 37}=\frac{95}{36}\gt \color{red}{\frac{13}{5}}=2.6=\frac{17316}{5\cdot 36\cdot 37}\gt\frac{96}{37}=\frac{17280}{5\cdot 36\cdot 37}=b\gt 2.59.$$
Note that
$$\frac{11}{4}\gt a\gt b\gt\frac{10}{4}$$
$$\frac{8}{3}\gt a\gt b\gt\frac{7}{3}$$
$$\frac{6}{2}\gt a\gt b\gt\frac{5}{2}$$
$$\frac{3}{1}\gt a\gt b\gt\frac{2}{1}$$
Hence, the minimum of $q$ is $5$.
|
https://math.stackexchange.com
|
181,694
|
[
"https://mathoverflow.net/questions/181694",
"https://mathoverflow.net",
"https://mathoverflow.net/users/38889/"
] |
What is known about finite groups $G$ for which there exists a Galois extension $K$ of $\mathbb{Q}$ ramified only at $2$ such that $\text{Gal}(K/\mathbb{Q}) \cong G$ ? More generally, which groups can be realized over $\mathbb{Q}$ with no ramification outside a given (finite) set of primes?
I am thus interested in results of two kinds:
1. Realization of specific groups.
2. Examples of groups which are not realizable in this manner, and restrictions on groups which are.
|
Let me first note that there is a slight ambiguity when one says "ramified only at 2". Strictly speaking, that means that the extension is unramified at every place
of $\mathbb Q$ except 2, including infinity. The latter means that the extension is totally real. Often, however, "ramified only at 2" means "ramified only possibly at 2 and $\infty$", which is probably what you mean. Here, to remove
ambiguity, for $S$ a finite set of places of $\mathbb Q$, I will use "ramified only at $S$" in the strict sense.
That being said, the short answer is that whatever the finite set $S$, there are strong restrictions on the finite groups $G$ that can appear as the Galois group of an extension of $\mathbb Q$ unramified outside $S$, but to "describe" all these restrictions (we may mean different things by that) is in general an open problem. To see what kind of restrictions can appear,
let us consider several situations, from the very particular to the general.
If $S=\emptyset$ or $S=\{\infty\}$, then Minkowski's theorem tells us that there is no nontrivial extension unramified outside $S$, so the only possible Galois group $G$ is the trivial one. A very strong restriction indeed.
If $S$ is any set, but you try to determine which <strong>abelian</strong> $G$ may appear, then the answer is given by class field theory. Precisely, if $S=\{\ell_1,\dots,\ell_k,\infty\}$, then the abelian groups which appear are exactly the quotients of $\mathbb Z_{\ell_1}^\ast \times \dots \times \mathbb Z_{\ell_k}^\ast$, and it is obvious that many abelian groups are not of this type (e.g. $\mathbb Z/\ell \mathbb Z$ for $\ell > \ell_1,\dots,\ell_k$). If $S=\{\ell_1,\dots,\ell_k\}$, replace $\mathbb Z_{\ell_1}^\ast \times \dots \times \mathbb Z_{\ell_k}^\ast$ by its quotient by the diagonal subgroup $\{1,-1\}$.
If $S$ is any set, and you're interested in groups $G$ that are $p$-groups (for a $p$ that may or may not be in $S$), then again class field theory can help you,
because of Frattini's argument, which says that a $p$-group is generated by any set that maps surjectively onto the maximal abelian $p$-torsion quotient of $G$.
So by the above, the $p$-groups $G$ appearing this way have a system of generators with at most as many elements as the dimension over $\mathbb F_p$ of the maximal $p$-torsion quotient of $\mathbb Z_{\ell_1}^\ast \times \dots \times \mathbb Z_{\ell_k}^\ast$. Is that all? No. In some cases we can describe all the $p$-groups $G$ that appear, but not in all. For instance, with $S=\{2,\infty\}$ and $p=2$, the groups $G$ appearing are exactly the $2$-groups having a system of two generators, one of them of square 1. For $S=\{2\}$, the only $2$-groups appearing are the cyclic ones. In general, the case $p \in S$ is better understood than the case $p \not\in S$. In the latter case, it is a folklore conjecture, on which not much is known, that only finitely many $p$-groups $G$ appear (one can check readily that the conjecture is true for abelian $p$-groups, by the above paragraph).
If now we consider general finite groups $G$, well, the above shows that there are restrictions, but determining them all is largely open. For instance, a conjecture of Shafarevich states that there is an $n=n(S)$ such that every group $G$ which is the Galois group of an extension unramified outside $S$ has a system of generators with fewer than $n$ elements. But this is open in every non-trivial case.
Let me also mention a simple but too-little-known result of Serre, even if it is not strictly speaking part of your question: [Edit: Sorry, I messed up the statement of the result. Here is the right version]. If every finite group is a Galois group over $\mathbb Q$ (that is, if the inverse Galois problem has a positive solution), then every finite group is the Galois group of an extension of $\mathbb Q$ unramified at infinity -- that is, a totally real extension. In other words, if the inverse Galois problem has a positive solution, there should be no restriction on $G$ in the case where $S$ is the set containing all finite places, but not the place at infinity.
|
Concerning the question of Pablo that follows Joël's answer:
If $k$ is an algebraically closed field, then the situation is completely understood, thanks to work of Grothendieck (in characteristic 0) completed by the proof of Abhyankar's conjecture by Raynaud and Harbater.
Precisely, let $C/k$ be a smooth affine curve, the complement of $r\geq1$ points in a projective smooth connected curve of genus $g$, over the algebraically closed field $k$.
Let $F=k(C)$ be the field of rational functions on $C$.
Extensions of $F$ that are unramified on $C$ correspond to connected
étale covers of $C$.
If $\mathop{\rm char}(k)=0$, then Grothendieck showed (this is the main result of SGA 1) that a finite group $G$ is the Galois group of a connected étale cover of $C$ if and only if it is generated by $2g+r-1$ elements. More precisely, the Galois group of the maximal extension of $k(C)$ unramified on $C$ is a free profinite group on $2g+r-1$ elements. It is worth observing that the fundamental group of a compact connected Riemann surface of genus $g$ with $r\geq1$ points removed is a free group on $2g+r-1$ generators. (The case $r=0$ is also treated by Grothendieck; $G$ then needs to be generated by $2g$ elements $a_1,b_1,\dots,a_g,b_g$ satisfying the relation $(a_1,b_1)(a_2,b_2)\dots(a_g,b_g)=1$.)
If $\mathop{\rm char}(k)=p>0$, then $C$ has « more » étale covers. For example,
if $C=\mathbf A^1_k$, then $f\colon C\to C$ given by $f(t)=t-t^p$ is a connected étale cover of degree $p$. Raynaud (when $C=\mathbf A^1_k$) and Harbater (in general) showed that a finite group $G$ is the Galois group of a connected étale cover of $C$ if and only if the quotient of $G$ by the (normal) subgroup generated by its $p$-Sylow subgroups is generated by $2g+r-1$ elements.
Specific examples had been exhibited by many mathematicians, such as Abhyankar himself (he has a long series of papers in this style) and Nori (when $G$ is a finite group of Lie type).
|
https://mathoverflow.net
|
5,690
|
[
"https://chemistry.stackexchange.com/questions/5690",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/2018/"
] |
In nucleophilic aromatic substitution reactions, why do fluorides react faster than bromides?
Ordinarily bromide is a better leaving group than fluoride, e.g. in <span class="math-container">$\mathrm{S_N2}$</span> reactions, so why isn't this the case here? The only thing I can think of is that fluorine is more electron-withdrawing (via the inductive effect), which could stabilise the Meisenheimer complex formed as an intermediate.
|
The key point to understanding why fluorides are so reactive in the nucleophilic aromatic substitution (I will call it S<sub>N</sub>Ar in the following) is knowing the rate determining step of the reaction mechanism. The mechanism is as shown in the following picture (Nu = Nucleophile, X = leaving group):
<img src="https://i.stack.imgur.com/uvM8U.png" alt="enter image description here">
Now, the first step (= addition) is very slow as aromaticity is lost and thus the energy barrier is very high. The second step (= elimination of the leaving group) is quite fast as aromaticity is restored. So, since the elimination step is fast compared to the addition step, the actual quality of the leaving group is not very important, because even if you use a very good leaving group (e.g. iodine), which speeds up the elimination step, the overall reaction rate will not increase as the addition step is the bottleneck of the reaction.
Now, what about fluorine? Fluorine is not a good leaving group, but that doesn't matter as I said before. It is not the leaving group ability of <span class="math-container">$\ce{F-}$</span> which makes the reaction go faster than with, say, bromine or chlorine, but its very high negative inductive effect (due to its large electronegativity). This negative inductive effect helps to stabilize the negative charge in the Meisenheimer complex and thus lowers the activation barrier of the (slow) addition step.
Since this step is the bottleneck of the overall reaction, a speedup here, speeds up the whole reaction.
Leaving groups with a negative mesomeric effect (but little negative inductive effect) are not good at stabilizing the Meisenheimer complex, because the negative charge can't be delocalized to them.
|
Fluorine is the most electronegative element, and the fluoride anion is also much smaller and less polarizable than any of the other halogen anions, making its activity much more dependent on solvent effects. In protic solvents, fluoride tends to be very strongly solvated as a hydrogen bond acceptor and is thereby stabilized, resulting in high leaving-group tendency and very low nucleophilicity. The larger halogens can be better leaving groups in aprotic solvents, however, where their bulk and polarizability results in greater comparative stability, and the stabilizing effect of hydrogen bonds is not present. I guess it's conceivable that the presence of fluorine atoms would better stabilize the Meisenheimer complex than other halogens, though I don't know for sure. Still, since all halogens are electron-withdrawing by induction or negative hyperconjugation, the presence of more strongly electron-withdrawing groups (e.g., nitro) conjugated to the $\pi$ system of the ring would still be necessary.
What follows is not a direct answer to your question, but a slight segue into a related subject:
The rate-determining step in S<sub>N</sub>Ar mechanisms is typically the initial nucleophilic attack resulting in formation of the Meisenheimer complex, or occasionally the final proton transfer if neutral nucleophiles are used. The loss of the leaving group is normally a fast step, given the high energy of the Meisenheimer complex intermediate: once the complex is formed, loss of the leaving group should be rapid. Therefore, I suspect aprotic solvents would be preferable in many situations (particularly with neutral nucleophiles), irrespective of the effect on the leaving group, since they don't hamper the activity of the nucleophile in the way that protic solvents can.
|
https://chemistry.stackexchange.com
|
350,636
|
[
"https://physics.stackexchange.com/questions/350636",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/30522/"
] |
Is there somewhere on the internet I can find cosmological redshift data? In particular, I would like to know the redshift around the time when the expansion of the Universe began to accelerate.
|
One of the main places where data about galaxies gets aggregated is the <A HREF="https://ned.ipac.caltech.edu/" rel="nofollow noreferrer">NASA Extragalactic Database</A> (NED). For example, <A HREF="https://ned.ipac.caltech.edu/cgi-bin/objsearch?objname=M101&extend=no&hconst=73&omegam=0.27&omegav=0.73&corr_z=1&out_csys=Equatorial&out_equinox=J2000.0&obj_sort=RA+or+Longitude&of=pre_text&zv_breaker=30000.0&list_limit=5&img_stamp=YES" rel="nofollow noreferrer">here's the information page for M101</A> with the default cosmology in their search form. In particular, you want to look at the redshift-independent distances and the redshift data points. Using the 'Metric Distance' you can calculate the cosmological redshift it would have if it weren't moving (for a given cosmology) by numerically inverting <A HREF="https://arxiv.org/pdf/astro-ph/9905116.pdf" rel="nofollow noreferrer">equation 15 from Hogg's cosmology calculations summary paper</A> (probably have to numerically integrate, too).
Note that the peculiar velocity (velocity relative to the Hubble flow) is usually around hundreds of kilometers per second. So any redshift greater than about $0.01$ (equivalent to a radial velocity of about $3,000\operatorname{km}\operatorname{s}^{-1}$) is almost certainly entirely dominated by the cosmological redshift of the object. There are a lot of databases replete with redshifts of galaxies that stretch back to around $z=1$ for ordinary galaxies, and much further for carefully selected galaxies and active galactic nuclei/quasars. For example: <A HREF="http://skyserver.sdss.org/dr1/en/sdss/data/data.asp" rel="nofollow noreferrer">Sloan Digital Sky Survey (SDSS)</A>, <A HREF="http://deep.ps.uci.edu/" rel="nofollow noreferrer">DEEP2</A>, the <A HREF="http://adsabs.harvard.edu/abs/2012ApJS..200....8K" rel="nofollow noreferrer">AGN and Galaxy Evolution Survey (AGES)</A>, and <A HREF="http://www.gama-survey.org/" rel="nofollow noreferrer">Galaxy and Mass Assembly (GAMA)</A>. This list is nowhere near complete, of course.
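As a Python sketch of that numerical inversion for a flat ΛCDM cosmology (the $H_0$ and density parameters below are assumed illustrative values, not NED's exact defaults):
<pre><code>import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

C_KM_S = 299_792.458           # speed of light, km/s
H0 = 70.0                      # Hubble constant, km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.3, 0.7    # flat Lambda-CDM densities (assumed)

def comoving_distance(z):
    """Line-of-sight comoving distance D_C (Hogg eq. 15), in Mpc."""
    integral, _ = quad(lambda zp: 1.0 / np.sqrt(OMEGA_M * (1 + zp)**3 + OMEGA_L), 0.0, z)
    return (C_KM_S / H0) * integral

def redshift_at_distance(d_mpc):
    """Numerically invert D_C(z) to get the cosmological redshift at a distance."""
    return brentq(lambda z: comoving_distance(z) - d_mpc, 0.0, 20.0)

print(f"z at 100 Mpc: {redshift_at_distance(100.0):.4f}")   # roughly 0.023
</code></pre>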
|
Redshift at the time when the universe started to accelerate:
From Friedmann's equations:
$$\dot{a}=aH=H_0\sqrt{\Omega_{m0}/a+a^2\Omega_{\Lambda 0}}$$
the required condition is
$\ddot{a}>0$, and the acceleration began where $\ddot{a}=0$.
The calculation gives
$$a=\left(\frac{\Omega_{m0}}{2\Omega_{\Lambda 0}}\right)^{1/3} \approx 0.6$$
$\rightarrow z=1/a-1 \approx 0.67$
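Plugging in representative present-day values ($\Omega_{m0} \approx 0.3$, $\Omega_{\Lambda 0} \approx 0.7$ are assumptions, not inputs from this answer):
<pre><code>omega_m0, omega_l0 = 0.3, 0.7
a = (omega_m0 / (2 * omega_l0)) ** (1 / 3)   # scale factor where the sign of a-ddot flips
z = 1 / a - 1
print(f"a = {a:.3f}, z = {z:.3f}")   # a ~ 0.598, z ~ 0.671
</code></pre>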
|
https://physics.stackexchange.com
|
1,351,142
|
[
"https://math.stackexchange.com/questions/1351142",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/247415/"
] |
Find a,b,c $\in \mathbb{R}$ for which the function is a) continuous, b) differentiable.
$$f(x)=\left\{\begin{array}{cc}
ax^2+bx+c & x<0 \\
2\sin x+\cos x & x\ge 0
\end{array}\right.$$
From what I know a function is continuous when the following occurs:
$$\lim_{x\to 0^+\:}f(x) = \lim_{x\to 0^-\:}f(x) = f(0).$$
Calculating the limits would lead me to $c=1$, but what about a and b?
What do I do to prove that it is differentiable? From what I know, a function is differentiable when $f'(x_0)$ <em>is a finite number</em>. But I get that $f'(x_0)=2$, which would seem to mean I can have any $a,b,c$?
|
You can use the rational root theorem to guess some roots.
<blockquote>
<strong>Rational root theorem.</strong> All rational roots have the form <span class="math-container">$\frac{p}{q}$</span>, with <span class="math-container">$p$</span> a divisor of the constant term and <span class="math-container">$q$</span> a divisor of the leading coefficient.
</blockquote>
For example, in the integer case, one can take <span class="math-container">$q=1$</span>, and one only has to test <span class="math-container">$p = \pm 1$</span>, <span class="math-container">$p = \pm 2$</span>, <span class="math-container">$p= \pm 5$</span>, and then you have it. (you of course could test <span class="math-container">$\pm 7$</span>, <span class="math-container">$\pm 10$</span>, <span class="math-container">$\pm 14$</span>, <span class="math-container">$\pm 35$</span> and <span class="math-container">$\pm 70$</span> as well, but it would be less work to just divide by the <span class="math-container">$x-5$</span>)
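A small Python sketch automating this candidate test (the cubic below is a hypothetical example, not the polynomial from the question):
<pre><code>from fractions import Fraction

def rational_roots(coeffs):
    """Rational roots of an integer-coefficient polynomial (highest degree first)."""
    def value_at(x):
        total = Fraction(0)
        for c in coeffs:            # Horner's scheme
            total = total * x + c
        return total

    divisors = lambda n: [d for d in range(1, abs(n) + 1) if n % d == 0]
    candidates = {Fraction(s * p, q)
                  for p in divisors(coeffs[-1])    # p divides the constant term
                  for q in divisors(coeffs[0])     # q divides the leading coefficient
                  for s in (1, -1)}
    return sorted(r for r in candidates if value_at(r) == 0)

# Hypothetical example: x^3 - 4x^2 - 7x + 10 = (x - 1)(x - 5)(x + 2)
print(rational_roots([1, -4, -7, 10]))   # [Fraction(-2, 1), Fraction(1, 1), Fraction(5, 1)]
</code></pre>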
Another technique:
<blockquote>
<strong>Descartes' rule of signs.</strong> The number of positive roots of <span class="math-container">$P(x)$</span> is equal to the number of sign changes in the sequence formed by the coefficients of <span class="math-container">$P(x)$</span>, or is less than that by an even number.
The number of negative roots of <span class="math-container">$P(x)$</span> is equal to the number of sign changes in the sequence formed by the coefficients of <span class="math-container">$P(-x)$</span>, or is less than that by an even number.
Roots with multiplicity <span class="math-container">$n$</span> are counted <span class="math-container">$n$</span> times.
</blockquote>
In this case it is useless. But if it is a polynomial like <span class="math-container">$x^3+3x^2+4x+2$</span>, you now know that there are no positive roots.
Last technique:
<blockquote>
<strong>Estimating roots.</strong> If you've tested all eligible roots that are smaller than, say, 10, then you can use this. If the first two coefficients are not extremely small compared to the last two, then it's a good idea to look at your eligible roots near <span class="math-container">$$-\frac{\mathrm{second} \; \mathrm{coefficient}}{\mathrm{first} \; \mathrm{coefficient}}$$</span>
</blockquote>
If the root is larger than 10, the <span class="math-container">$x^3$</span> and <span class="math-container">$x^2$</span> term are large enough to be able to ignore the smaller terms, thus this is a good estimation. Write <span class="math-container">$a$</span> for the first coefficient and <span class="math-container">$b$</span> for the second, <span class="math-container">$c$</span> for the third and <span class="math-container">$d$</span> for the fourth.
Now, if <span class="math-container">$x$</span> is large, then <span class="math-container">$ax^3+bx^2+cx+d \sim ax^3+bx^2$</span>. Therefore the roots will be comparable. But <span class="math-container">$ax^3+bx^2=0$</span> gives <span class="math-container">$x^2=0$</span> or <span class="math-container">$ax+b=0$</span>. In the first case <span class="math-container">$x$</span> isn't large enough. In the second case we have <span class="math-container">$ax+b=0$</span>, thus <span class="math-container">$ax=-b$</span>, thus <span class="math-container">$x=\frac{-b}{a}=-\frac{\mathrm{second} \; \mathrm{coefficient}}{\mathrm{first} \; \mathrm{coefficient}}$</span>.
|
You could work by locating the roots approximately: evaluate the polynomial at some easy values and note sign changes - Sturm's Theorem is a heavy-duty resource, and Descartes' Rule of Signs can be indicative. There are more basic observations too, which can help to narrow the search amongst rational roots - using changes in the sign of the value and the intermediate value theorem.
In this example the polynomial is positive at $x=0, x=-1$ but negative at $x=1$ - so that's located a small positive root. And the polynomial is positive for large $x$ and negative for small (large negative) $x$. Choose $x=\pm10$ as easy to calculate - the value is positive for $x=10$ and negative for $x=-10$.
So if there is an integer root (noting that we've located all three roots approximately) it is greater than $1$ and less than $10$, or less than $-1$ and greater than $-10$ and is a factor of $70$. And every calculation we've done so far has been very easy.
The calculations for $\pm n$ can be done together, because the terms have the same magnitude, and with the options being $2,5,7$ you do $\pm 5$ first because that tells you whether to do $2$ or $7$ if $5$ is a miss.
|
https://math.stackexchange.com
|
404,543
|
[
"https://softwareengineering.stackexchange.com/questions/404543",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/356397/"
] |
I have code that looks something like this in a class:
<pre><code>public string x() {
    foreach (var a in alist) {
        // do something
    }
    return someString;   // placeholder result
}

public int y() {
    foreach (var a in alist) {
        // do something
    }
    return someInteger;  // placeholder result
}

public double z() {
    foreach (var a in alist) {
        // do something
    }
    return someDouble;   // placeholder result
}
</code></pre>
I sense a code smell here in the multiple foreach loops over the same object, but I am not sure whether it is real. Is there any way I can refactor this code so that the loop appears in only one place?
|
<h2>It's not inherently a code smell</h2>
There is nothing wrong with having a <code>foreach</code> in multiple methods, <em>unless</em> you always run these three methods consecutively, at which point you can simplify it to:
<pre><code>public void xyz()
{
foreach (var a in alist)
{
x(a);
y(a);
z(a);
}
}
</code></pre>
But if you do not always call these methods in the same succession (which I'm suspecting is the case), then this does not apply.
<blockquote>
I feel like going through the same object different methods is a code smell
</blockquote>
If you were calculating a value based on the same input parameters, I'd agree. However, this is not the case here. Each method requires its own enumerator to do its own enumeration.
<blockquote>
What if change the name of the alist variable in the future? I would have to make change at least 3 places.
</blockquote>
This would apply to every variable, method, class name, or namespace you would ever use. It would literally render you unable to write <em>any</em> code that wouldn't allegedly smell.
With the right IDE (or extension), this isn't a problem, since you can refactor names, which will change all references to that name across the codebase.
In the case of Visual Studio, that's done by pressing F2 while your cursor is on the name (variable/method/class/...).
<blockquote>
Is there any way I can extract the foreach to a common place and get the same desired output?
</blockquote>
Not meaningfully so. I mean, you could abstract it, but it would add complexity without actually improving performance or maintainability. It would detract from readability, not just because of the complexity increase, but also because you change a well known concept to something homebrewed, which requires additional knowledge for a developer to follow when they read the code.
Several drawbacks, no benefits. No reason to do this.
<hr>
The only reasonable case for a code smell would not be for the enumeration itself, but any preliminary filtering logic. For example:
<pre><code>public string x ()
{
    foreach (var a in alist.Where(item => item.Foo == "Bar"))
    {
        //do something
    }
    return someString;   // placeholder result
}
// And the same Where() for y() and z()
</code></pre>
Here, the filtering logic itself can be abstracted to avoid needless repetition:
<pre><code>private IEnumerable<A> aBarList => alist.Where(a => a.Foo == "Bar");
public string x ()
{
    foreach (var a in aBarList)
    {
        //do something
    }
    return someString;   // placeholder result
}
// And the same Where() for y() and z()
</code></pre>
This is just one of many possible implementations. Your vague example code makes it impossible to judge the best implementation for your current case.
|
It's not a code smell. In fact Martin Fowler over at <a href="https://refactoring.com" rel="nofollow noreferrer">refactoring.com</a> argues you should repeat a loop if it adds value, because of the trivial computing cost of repeating said loop.
You should make alist a function argument and make the function static (if possible in the real world)
|
https://softwareengineering.stackexchange.com
|
39,257
|
[
"https://biology.stackexchange.com/questions/39257",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/19159/"
] |
I am learning about frameshift mutations. Frameshifts can occur due to a nucleotide deletion. Suppose that, due to a deletion somewhere upstream of the original start codon, a frameshift generates two additional start codons just before the stop codon in the new reading frame. What would happen in terms of translation?
AUG-GCC-AUA-AUG--------UAA
(the first and fourth codons are AUGs, i.e. potential "starts", and UAA is the stop)
|
There is a basic misconception in the question you have asked, which @biogirl has explained. <strong>There is only one start codon in any mRNA</strong> and it defines the <strong>open reading frame.</strong>
All other AUGs in the open reading frame are simply codons that encode the amino acid methionine and have no function in the start of translation. There are factors other than AUG that determine the start of translation.
So a frame shift that gives you an additional AUG only means that you will have a different amino acid encoded in the resulting polypeptide. A frame shift will generally completely alter the protein product of the gene. If, however, the frameshift disrupts the start codon, then it is unlikely that you will have any translation whatsoever, as the other elements necessary for determining the start of translation will likely not be present in other areas of the coding sequence. In prokaryotes, you need a Shine-Dalgarno sequence to initiate translation, and in eukaryotes, though all of the factors for translation start are not well understood, many genes carry a Kozak sequence that indicates to the ribosome the start of the open reading frame.
<strong>The more important codons to look for are introduced stop codons.</strong> These three codons, <strong>UAA, UAG, and UGA</strong>, do not have tRNAs with complementary anticodons (for the most part, as tRNA genes can also sustain mutations that change their anticodon) and therefore all result in the termination of translation if the shifted frame results in the ribosome reading one of them in frame.
|
AUG functions as a start codon only when it is at the first position of the open reading frame. Whenever an AUG occurs further along, it simply codes for the amino acid methionine. Go through the basics of translation in a good textbook.
|
https://biology.stackexchange.com
|
431,317
|
[
"https://electronics.stackexchange.com/questions/431317",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/217779/"
] |
I'm trying to understand the circuitry of TV signal splitters and associated boosters/amplifiers, and I have two questions.
My situation is as follows. I have a TV aerial (antenna) in my loft. It connects to a non-powered box, which I assume must be a passive splitter. From there cables run to sockets in a total of 6 rooms. However, only in two or three of those rooms does a TV connected to the socket show any signal; and even in those rooms, there is no signal unless a booster, located in the room nearest to the aerial, is connected and powered up.
That all sounds sensible, but then I stop being able to understand.
<ol>
<li>Here's the odd thing, and the first question. To make any TV on the system work, the booster has to be connected to the aerial socket, but nothing needs to be connected to the booster output; so long as the booster is connected and switched on, the TV in a neighbouring room will work. It's as though it is somehow sending the amplified signal back up its input cable. But I've looked at circuit diagrams for boosters, and that doesn't look possible. Can anyone explain what is going on?</li>
<li>I am trying to find out what is wrong in the rooms where no signal is ever received. I read somewhere that if I look across the terminals of the TV socket with an ohmmeter, I should see effectively zero resistance, since there is continuity through the aerial. However, this is not true for any of my sockets. With all devices disconnected, if I measure the resistance at the socket the booster normally connects to, I see about 4 kΩ. Across any of the other sockets (including those where a signal is successfully received), I see no continuity at all. So I suppose the passive splitter must have a capacitor or transformer somewhere in its circuitry, but I can't find a circuit diagram anywhere that would confirm this. Can anyone say whether this is the case, i.e. whether I should be able to see continuity when looking into a socket?</li>
</ol>
Background information:
<ul>
<li>Until recently we have never tried to use TVs in the rooms where we now find they don't work, so this is probably not a new problem. </li>
<li>In particular, the non-functioning sockets have not been used since the analogue era.</li>
<li>The wiring is probably at least 30 years old, and certainly pre-digital. </li>
<li>I'm in the UK, and the TV signal is digital terrestrial. </li>
<li>Fitting an outdoor aerial to get a better original signal is not an option in our neighbourhood. </li>
<li>The passive splitter and aerial connections are all screw-downs, and they are not conveniently located, so swapping cables around for test purposes is slow and painful.</li>
</ul>
|
You've made some wrong assumptions about what the parts of the system are. The part you're describing as a non-powered passive splitter is actually a powered active splitter. Splitting one aerial signal into six with a passive splitter is unlikely to give you sufficient signal on any of the six outputs, especially if the aerial is in the loft.
The part you're describing as the booster is just the power supply to the active splitter. It sends a DC voltage up the cable running to the active splitter, and filters it out of the cable running to the TV. That's why it has to be powered up for any of the TVs to work.
The rooms where TVs don't work are either down to a faulty output from the splitter or a defective cable. To do a basic test on each cable, disconnect it from the splitter and check that it's open circuit, then short one end together and check that it now shows a short circuit at the other end.
|
It is desirable to have the amplifier as close to the aerial as possible, and certainly before the signal is split; however, getting mains power to such a location is often problematic.
The solution is an amplifier powered via one of the output coax lines: the amplifier is sited close to the aerial, while a power-injection unit is sited close to the TV. These are often sold as "masthead" amplifiers. I believe this is the setup you have.
I don't think you can read much into whether or not there is DC continuity on the output of an amplifier; whether you see it depends entirely on the details of the amplifier's internal circuitry.
I would start with end to end continuity and short-circuit tests on the cable runs (note: some sockets have isolation capacitors, so you may need to test from the terminals on the back of the sockets rather than the connections on the face). If you find any opens or shorts then you obviously need to fix them.
Failing that, it may be worth swapping connections around to see whether the problem follows the cable or the connection on the amplifier; but honestly, given the age of the amplifier and the difficulty of access, I'd be more inclined to replace the amplifier at that point.
|
https://electronics.stackexchange.com
|
26,154
|
[
"https://ai.stackexchange.com/questions/26154",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/44278/"
] |
I'm trying to implement Deep Q-Learning for a pet problem having a continuous state space and discretized action space.
The algorithm for table-based Q-Learning updates a single entry of the Q table - i.e. a single <span class="math-container">$Q(s, a)$</span>. However, a neural network outputs an entire row of the table - i.e. the Q-values for every possible action for a given state. So, what should the target output vector be for the network?
I've been trying to get it to work with something like the following:
<pre><code>q_values = model(state)              # predicted Q-values for every action in `state`
action = argmax(q_values)            # note: always greedy, no exploration
next_state = env.step(state, action)
next_q_values = model(next_state)    # Q-values of the successor state
max_next_q = max(next_q_values)
target_q_values = q_values           # copy the current predictions...
target_q_values[action] = reward(next_state) + gamma * max_next_q  # ...overwriting only the taken action's entry
</code></pre>
The result is that my model tends to converge on some set of fixed values for every possible action - in other words, I get the same Q-values no matter what the input state is. (My guess is that this is because, since only 1 Q-value is updated, the training is teaching my model that most of its output is already fine.)
What should I be using for the target output vector for training? Should I calculate the target Q value for every action, instead of just one?
|
As you say, the output of a <span class="math-container">$Q$</span> network is typically a value for every action of the given state. Let us call this output <span class="math-container">$\mathbf{x} \in \mathbb{R}^{|\mathcal{A}|}$</span>. To train the network using the squared Bellman error, you first need to calculate the scalar target <span class="math-container">$y = r(s, a) + \gamma\max_{a'} Q(s', a')$</span>. Then we take a vector <span class="math-container">$\mathbf{x'} = \mathbf{x}$</span> and change its <span class="math-container">$a$</span>th element to equal <span class="math-container">$y$</span>, where <span class="math-container">$a$</span> is the action you took in state <span class="math-container">$s$</span>; call this modified vector <span class="math-container">$\mathbf{x'}_a$</span>. We calculate the loss <span class="math-container">$\mathcal{L}(\mathbf{x}, \mathbf{x'}_a)$</span> and backpropagate through it to update the parameters of the network.
Note that when we use <span class="math-container">$Q$</span> to calculate <span class="math-container">$y$</span> we typically use some form of target network; this can be a copy of <span class="math-container">$Q$</span> whose parameters are only updated every <span class="math-container">$i$</span>th step, or a network whose weights track the main network's weights via a Polyak average after every update.
Judging by your code, your action selection may be what is causing you problems. As far as I can tell you're always acting greedily with respect to your <span class="math-container">$Q$</span>-function. You should act <span class="math-container">$\epsilon$</span>-greedily instead, i.e. with probability <span class="math-container">$\epsilon$</span> take a random action, and act greedily otherwise. Typically you start with <span class="math-container">$\epsilon=1$</span> and decay it over time down to some small value such as 0.05.
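To make this concrete, here is a minimal PyTorch-style sketch of the update described above (all names such as <code>q_net</code>, <code>target_net</code>, and the batch layout are hypothetical; this illustrates the idea, not your exact setup):
<pre><code>import random
import torch
import torch.nn.functional as F

def epsilon_greedy(q_net, state, epsilon, n_actions):
    # With probability epsilon explore; otherwise act greedily.
    if random.random() < epsilon:
        return random.randrange(n_actions)
    with torch.no_grad():
        return q_net(state.unsqueeze(0)).argmax(dim=1).item()

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    states, actions, rewards, next_states, dones = batch

    # Q(s, a) for the actions actually taken (one scalar per sample).
    q_taken = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # Scalar target y = r + gamma * max_a' Q(s', a') from the frozen
    # target network; no gradient flows through it.
    with torch.no_grad():
        max_next_q = target_net(next_states).max(dim=1).values
        y = rewards + gamma * (1.0 - dones) * max_next_q

    # Equivalent to copying x and overwriting its a-th element with y:
    # only the taken action's Q-value contributes to the loss.
    loss = F.mse_loss(q_taken, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
</code></pre>
Training only on the taken action's entry (rather than forcing the other outputs to stay at their current values) avoids exactly the "every state gives the same Q-values" behaviour you describe.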
|
There are a couple of ways you can define the architecture of a DQN. The most common is to take the state as input and output the value of every possible action - this gives a DQN with multiple outputs. The other, less efficient, way takes a state-action pair as input and outputs a single real value - this approach is typically avoided, since the model must be run multiple times to get estimates for different actions.
The replay buffer is used to store <span class="math-container">$(S,A,R,S')$</span> transitions as encountered under your <span class="math-container">$\epsilon$</span>-soft policy. We sample one of these transitions from the replay buffer and calculate an estimate of the value function for <span class="math-container">$(S,A)$</span>, i.e. <span class="math-container">$\hat Q(S,A,\theta)$</span>, and then we calculate a target as follows: <span class="math-container">$$target = R+\gamma\max_\limits{a'}\hat Q(S',a',\theta^-)$$</span>
Assuming you use the first architecture, you can then use a squared-error loss function, defined as follows, and update your parameters to reduce it:
<span class="math-container">$$L(\theta) = (target-\hat Q(S,A,\theta))^2$$</span>
Assuming for now the target is fixed (I'll explain this in a minute), only <span class="math-container">$\hat Q(S,A,\theta)$</span> is a function of <span class="math-container">$\theta$</span> in the loss function. <span class="math-container">$\hat Q(S,A,\theta)$</span> corresponds to one output node of your DQN, and therefore, as you've already highlighted, backpropagation updates the parameters so that the value of this one node tends towards the specified target.
This is just how Q-learning works: we use samples generated by the behaviour policy to create <span class="math-container">$L(\theta)$</span> and then tweak the parameters to minimise the cost. As we do this for more and more samples, the network hopefully finds parameters that accommodate every sample it's been trained on so far (with more emphasis on the most recent samples).
<strong>As to your issue, are you sure you're training on many different samples and not just a single one? It may just be a bug you've overlooked.</strong>
<hr />
<strong>Explaining <span class="math-container">$\theta^-$</span></strong>
I used a slightly different notation, <span class="math-container">$\theta^-$</span>, for the parameters used to generate the bootstrapped estimate <span class="math-container">$\max_\limits{a'}\hat Q(S',a',\theta^-)$</span>. <span class="math-container">$\theta^-$</span> is only synchronised with <span class="math-container">$\theta$</span> every <span class="math-container">$n^{th}$</span> step, because we want to keep the target as constant as possible. The reason is that Q-learning does not necessarily converge when using neural networks, partly because bootstrapping combined with generalisation across states can make the optimisation diverge. Using <span class="math-container">$\theta^-$</span> helps prevent this from happening.
Ultimately the idea of the replay buffer and the fixed parameter for bootstrapping are to try to convert the RL problem into a supervised learning problem because we know much more about how to deal with supervised learning problems when using DNNs.
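For completeness, here is a sketch of the two common ways of maintaining <span class="math-container">$\theta^-$</span> (the hard sync every <span class="math-container">$n$</span>th step described here, and the Polyak average mentioned in the other answer); the function names are hypothetical:
<pre><code>import torch

def hard_update(target_net, q_net):
    # theta_minus <- theta, done once every n-th training step.
    target_net.load_state_dict(q_net.state_dict())

def polyak_update(target_net, q_net, tau=0.005):
    # theta_minus <- (1 - tau) * theta_minus + tau * theta, every step.
    with torch.no_grad():
        for tp, p in zip(target_net.parameters(), q_net.parameters()):
            tp.mul_(1.0 - tau).add_(tau * p)
</code></pre>
Either scheme keeps the bootstrap target slow-moving relative to the online network, which is what makes the update look more like supervised learning.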
|
https://ai.stackexchange.com
|
511,960
|
[
"https://electronics.stackexchange.com/questions/511960",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/258342/"
] |
In general, if we are working on a sequential circuit, say a flip-flop (e.g. a D flip-flop), the code we write for the always block is:
<pre><code> always @(posedge clk or posedge reset)
begin
if (reset) begin
// Asynchronous reset when reset goes high
q <= 1'b0;
end else begin
// Assign D to Q on positive clock edge
q <= d;
end
end
</code></pre>
I am confused on one point: why is a check like <code>if (clk)</code> not written before <code>q <= d</code> in the always block?
Motivation:
A posedge corresponds to any of the following transitions:
<ul>
<li>0 to 1</li>
<li>x to 1</li>
<li>z to 1</li>
<li>0 to x</li>
<li>0 to z</li>
</ul>
So why, in most sequential code, don't we confirm after the trigger that the clock has actually transitioned from low to high?
I've searched the forum for this topic but can't find a specific answer on this. I am a newbie and will appreciate your guidance.
|
You have a valid point. If we were being very careful we would want to know if the <code>clock</code> or <code>reset</code> was actually in the <code>X</code> state, and we would probably set <code>Q</code> to <code>X</code> if that was the case.
So why don't we do those checks? The <code>clock</code> and <code>reset</code> are signals that we design very carefully to ensure that they are solid digital signals, with fast transitions from 0 to 1. So, it is often safe to assume that they are never <code>X</code> for a significant length of time.
If you do want to be a careful designer, it is usually better to check for unknown values of <code>clock</code> and <code>reset</code> at their point of origin rather than everywhere they are used. Adding assertions for these signals in just one part of the design allows simulations to be much more efficient than adding complex if/then/else checks in millions of flip-flops.
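As a rough sketch of what such a single point-of-origin check could look like (SystemVerilog; the module name and messages are hypothetical):
<pre><code>// Instantiate one of these next to the clock/reset source instead of
// checking inside every flip-flop. $isunknown returns 1 for X or Z.
module clk_reset_monitor (input logic clk, input logic reset);
  always @(clk) begin
    if ($isunknown(clk))
      $error("clk is X/Z at time %0t", $time);
  end
  always @(reset) begin
    if ($isunknown(reset))
      $error("reset is X/Z at time %0t", $time);
  end
endmodule
</code></pre>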
|
It's implied: if the block triggered and reset is <strong>not</strong> high, then a rising clock edge must have triggered the always block (it fires on either posedge reset <strong>or</strong> posedge clk). Basically, if reset is high you want reset behaviour no matter what; otherwise you want flip-flop behaviour.
|
https://electronics.stackexchange.com
|