| qid (int64, 1 to 4.65M) | metadata (listlengths, 3 to 3) | prompt (stringlengths, 31 to 25.8k) | chosen (stringlengths, 17 to 28.2k) | rejected (stringlengths, 19 to 40.5k) | domain (stringclasses, 28 values) |
|---|---|---|---|---|---|
49,709
|
[
"https://mathoverflow.net/questions/49709",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10909/"
] |
I'm assuming someone must have scooped me on this simple argument. Where does it (first) appear in the literature?
Fix an ultrafilter $\mu$ on $\omega$, the natural numbers.
Alice and Bob play a nim-like game. At the start each player "holds" the empty set and the starting "position" consists of $\omega$. Beginning with Alice, each player in turn will remove a non-empty finite initial segment from the current position (leaving some final segment of $\omega$) and deposit the removed segment into his or her holdings. Play proceeds for $\omega$ rounds.
Now the object of the game: to finish with holdings that belong to the ultrafilter $\mu$.
Strategy stealing obviates the possibility of either player possessing a winning strategy; the existence of $\mu$ thus contradicts AD. In more detail, if either player has a winning strategy, the game must admit infinitely many winning positions, but that would allow the other player the possibility of moving to a winning position on his or her very first move.
<hr>
Many sources work much harder than this to prove the weaker result that AC contradicts
AD. (Afterthought: I'd love to see a big-list question collecting theorems where unnecessarily complicated proofs permeate the literature despite the availability of simpler treatments... MO appropriate?)
|
Or just take all powers of $3$ and add to them all numbers that are congruent to $1$ modulo $3$.
|
This is false. We construct $A$ inductively, so that the following holds:
<ul>
<li>$A$ contains all powers of two greater than or equal to $4$ and no other even numbers.</li>
<li>The number of odd numbers in $A$ between $2^j$ and $2^{j+1}$ is $2^{j-2}$.</li>
<li>No power of two is in a 3-AP contained in $A$.</li>
</ul>
We start by specifying that $4\in A, 5\in A, 6\notin A,7\notin A$.
Suppose $A\cap\{1,\ldots, 2^m-1\}$ has been defined so that the above properties hold. We next define $A\cap\{ 2^m,\ldots, 2^{m+1}-1\}$ as follows: $2^m\in A$. There are $1+2+\ldots+2^{m-3}<2^{m-2}$ odd numbers smaller than $2^m$ in $A$; let $O_m$ be the set of all of them. We choose $2^{m-2}$ odd numbers in
$$
\{2^m,\ldots, 2^{m+1}\} \backslash (2^{m+1}-O_m)
$$
and add them to $A$. We can do this since $|O_m|< 2^{m-2}$.
The first two properties are clear from the construction. To check the last (the one we care about), note that $2^m$ can't be the first/last term of a $3$-AP in $A$, since then the last/first term would also be even, hence another power of $2$, and then the middle one would be even, and a power of $2$ as well. But $2^m$ can't be the middle term of a $3$-AP either: for the same reason as before, the other two terms must be odd. Let $(a,2^m,c)$ be the AP. Then $a\in O_m$ by definition, but this implies $c-2^m=2^m-a$, or $c\in 2^{m+1}-O_m$, a case which was excluded in the construction.
Clearly $A$ has density $1/4$ so this completes the proof.
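For anyone who wants to experiment, here is a short computational sketch (my own illustration; the particular greedy choice of odd numbers is just one admissible option) that builds an initial segment of such an $A$ and brute-force checks that no power of $2$ lies in a $3$-AP contained in it:
<pre><code>def build_A(max_m=11):
    # powers of two >= 4, plus 2^(m-2) odd numbers per dyadic block, avoiding 2^(m+1) - O_m
    A = {4, 5}
    for m in range(3, max_m):
        A.add(2**m)
        O_m = [x for x in A if x % 2 == 1 and x < 2**m]
        forbidden = {2**(m + 1) - x for x in O_m}
        added = 0
        for x in range(2**m + 1, 2**(m + 1), 2):   # odd numbers in the block
            if x not in forbidden:
                A.add(x)
                added += 1
                if added == 2**(m - 2):
                    break
        assert added == 2**(m - 2)                 # possible since |O_m| < 2^(m-2)
    return A

A = build_A()
powers = {x for x in A if x & (x - 1) == 0}        # the powers of two in A
for a in A:
    for b in A:
        c = 2 * b - a
        if b > a and c in A:                       # (a, b, c) is a 3-AP inside A
            assert not ({a, b, c} & powers), (a, b, c)
print("no power of two lies in a 3-term AP contained in A")
</code></pre>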
<hr>
If $A$ has positive upper density, one can still ask what is the largest possible size of the set $B$ of all elements of $A$ which are not in any $3$-AP contained in $A$. Clearly $B$ has density $0$ by Roth's Theorem (and we get better bounds from the quantitative bounds in Roth's Theorem). Is it possible to do better?
|
https://mathoverflow.net
|
3,045,344
|
[
"https://math.stackexchange.com/questions/3045344",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/569640/"
] |
I have a book about group theory and there was the following question:
<blockquote>
Let <span class="math-container">$G$</span> be a set of all the real matrices in the following form: <span class="math-container">$\begin{pmatrix}a & b\\
-b & a
\end{pmatrix}$</span> when <span class="math-container">$a^2+b^2>0$</span>.
<ol>
<li>Prove that <span class="math-container">$G$</span> is a group.</li>
<li>Prove that <span class="math-container">$G\cong (C^\times,\cdot )$</span>. </li>
</ol>
</blockquote>
I successfully proved that <span class="math-container">$G$</span> is a group. Now I'm trying to prove the second sub-question. In the book they suggested defining the following function:
<span class="math-container">$$ f:\begin{pmatrix}a & b\\
-b & a
\end{pmatrix} \to a+ib$$</span>
Also they wrote "obviously <span class="math-container">$f$</span> is a bijection", and then they proved the homomorphism equation. The only part that I didn't understand is why <span class="math-container">$f$</span> is a bijection, and why it is so obvious. How can I prove it formally?
|
The only potential problem with choosing <span class="math-container">$E$</span> so <span class="math-container">$0 < \mu(E) < \infty$</span> is that there are positive measures <span class="math-container">$\mu$</span> with sets <span class="math-container">$E$</span> such that <span class="math-container">$\mu(E) = \infty$</span> and no measurable subset of <span class="math-container">$E$</span> has nonzero finite measure. But that can't happen here, because <span class="math-container">$f \in L^p$</span> and <span class="math-container">$f \ne 0$</span> on <span class="math-container">$E$</span>. Consider sets of the form <span class="math-container">$\{x: f(x) > \epsilon\}$</span> (or <span class="math-container">$< - \epsilon$</span>).
|
Here's one problem: Having <span class="math-container">$f\ne 0$</span> on <span class="math-container">$E$</span> doesn't imply that
<span class="math-container">$$
\int_E f\, d\mu \ne 0.
$$</span>
For instance, the positive and negative part of <span class="math-container">$f$</span> could cancel each other on <span class="math-container">$E$</span>.
I suggest that you use the fact that for <span class="math-container">$X=L^p$</span> you have <span class="math-container">$X^*\simeq L^q$</span>.
|
https://math.stackexchange.com
|
594,479
|
[
"https://physics.stackexchange.com/questions/594479",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/257503/"
] |
For an Intro Thermal Physics course I am taking this year, I had a simple problem which threw me off guard. I would appreciate some input to see where I am lacking. The problem is as follows:
Does the entropy of the substance decrease on cooling? If so, does the <strong>total</strong>
entropy decrease in such a process? Explain.
<strong>Here is how i started this:</strong>
->Firstly, for a body of mass <em>m</em> and specific heat <em>c</em> (assuming it is constant), the heat absorbed by the body for an infinitesimal temperature change is <span class="math-container">$dQ=mc\,dT$</span>.
->Now if we raise the temperature of the body from <span class="math-container">$T_1$</span> to <span class="math-container">$T_2$</span>, the entropy change associated with this change in the system is <span class="math-container">$\int_{T_1}^{T_2}mc\frac{dT}{T}=mc\ln\frac{T_2}{T_1}$</span>. This means the entropy of my system has increased. Up to this point everything was fine.
<strong>I face difficulty in the following:</strong>
<*><em>Is this process, the act of heating this solid, a reversible or an irreversible one?</em> Now, I know that entropy is a state variable, so even if it was irreversible, to calculate the entropy change for the system during this process we must find a reversible process connecting the same initial and final
states and calculate the system entropy change. We can do so if we imagine that we have at our disposal a
heat reservoir of large heat capacity whose temperature T is at our control.
We first adjust the reservoir temperature to <span class="math-container">$T_1$</span> and put the object in contact with the reservoir. We then slowly (reversibly) raise the reservoir temperature from <span class="math-container">$T_1$</span> to <span class="math-container">$T_2$</span>. The body gains entropy in this process, the amount I have calculated above.
According to the main problem, if I were to reverse this process and slowly lower the temperature of the body from <span class="math-container">$T_2$</span> to <span class="math-container">$T_1$</span>, wouldn't the opposite happen? i.e. the body loses entropy to the reservoir, the same amount as calculated above but with the opposite sign?
<*> From the above discussion, can I say that the net entropy change of the system+surroundings is zero? Had it been a reversible process, then from the second law I know it would've been zero; even if it is irreversible, as long as I connect the same two states with a reversible path, the net still comes out to be zero.
Am I right to think of it as such? I have had trouble discerning which processes are reversible/irreversible for a while.
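To make the bookkeeping concrete, here is a small numeric sketch of the reasoning above (my own illustration with made-up values for <em>m</em>, <em>c</em>, <span class="math-container">$T_1$</span> and <span class="math-container">$T_2$</span>): the body gains <span class="math-container">$mc\ln(T_2/T_1)$</span>, and as the reservoir temperature is raised in more and smaller steps, the reservoir's entropy change approaches the negative of that, so the combined change tends to zero.
<pre><code>import math

m, c = 1.0, 4186.0        # kg, J/(kg K) -- made-up values for illustration
T1, T2 = 300.0, 350.0     # K

dS_body = m * c * math.log(T2 / T1)    # entropy gained by the body

def dS_reservoir(steps):
    # reservoir temperature raised from T1 to T2 in `steps` equal jumps;
    # at each jump the body equilibrates with the reservoir
    Ts = [T1 + (T2 - T1) * k / steps for k in range(steps + 1)]
    dS = 0.0
    for Ta, Tb in zip(Ts, Ts[1:]):
        dQ = m * c * (Tb - Ta)         # heat the reservoir hands to the body
        dS -= dQ / Tb                  # reservoir sits at Tb while delivering it
    return dS

for n in (1, 10, 1000):
    print(f"{n:5d} steps: total entropy change = {dS_body + dS_reservoir(n):8.3f} J/K")
# the total tends to zero as the heating becomes quasi-static (reversible)
</code></pre>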
|
Simple answer: yes.
Think about taking two extreme cases:
How much does a slinky extend in a gravity-free space? None at all.
How much would it extend if it was on, perhaps, Jupiter or even a black hole? It should extend by a large amount.
Gravity does play a role.
|
If a slinky is hanging vertically in a gravitational field, the amount of stretch in any short section depends on the weight of the coil hanging below that section. Less gravity will produce less stretch.
|
https://physics.stackexchange.com
|
652,455
|
[
"https://electronics.stackexchange.com/questions/652455",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/248767/"
] |
I was thinking about diamonds, and how they're excellent thermal conductors and yet at the same time very good electrical insulators.
Does the opposite of diamond exist, i.e. are there commonly available, inert (as in, safe to use/handle) conductors with poor thermal conductivity?
|
<blockquote>
<em>Does infinite magnetic permeability (e.g. for an ideal transformer)
violate conservation of energy?</em>
</blockquote>
Infinite magnetic permeability produces infinite inductance (even for a single-turn coil), and the rate of change of current that can be produced through an infinite inductance when applying a <strong>finite</strong> voltage is zero.
Hence no current can flow in, no energy can be put in, and there is no violation of the conservation of energy.
<blockquote>
<em>Creating infinite magnetic field lines from a finite applied field</em>
</blockquote>
It can't be done without applying infinite voltage for an infinite amount of time.
<sub>Magnetic permeability is a material constant (just like electrical resistivity); it doesn't imply that any source of current is applied that might create a H-field just as electrical resistivity doesn't imply a flow of current due to an applied voltage.</sub>
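As a quick numeric illustration of the first point (my own sketch, using the usual solenoid/toroid approximation L = μ<sub>r</sub>μ<sub>0</sub>N²A/ℓ with made-up coil dimensions): as the relative permeability grows, the current ramp produced by a fixed finite voltage, and hence the energy taken in over any fixed time, both go to zero.
<pre><code>import math

mu0 = 4 * math.pi * 1e-7          # H/m
N, A, length = 100, 1e-4, 0.1     # hypothetical coil: turns, core area (m^2), path length (m)
V, t = 10.0, 1.0                  # finite applied voltage (V), observation time (s)

for mu_r in (1e2, 1e4, 1e6, 1e8):
    L = mu_r * mu0 * N**2 * A / length   # solenoid/toroid approximation
    dI_dt = V / L                        # current ramp under a constant voltage
    E = 0.5 * L * (dI_dt * t)**2         # energy stored after time t (= V^2 t^2 / 2L)
    print(f"mu_r = {mu_r:.0e}:  L = {L:.3e} H,  dI/dt = {dI_dt:.3e} A/s,  E = {E:.3e} J")
# both dI/dt and the absorbed energy tend to zero as mu_r -> infinity
</code></pre>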
|
The piece would "short out" the H field. The energy in a unit volume of a magnetic core is B times H, and -- of necessity -- the H field would be zero. So the energy stored in the core would be zero.
I'm not sure how this would work out with a solenoid core (i.e., an open core). For a toroidal core a simple coil would have infinite inductance (which is expected).
|
https://electronics.stackexchange.com
|
131,470
|
[
"https://security.stackexchange.com/questions/131470",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/119006/"
] |
I recently had a call from someone I thought was my broadband company. As the call went on, I realised that it was someone who was trying to hack my bank details. They asked for a card reader, as they claimed they wanted to refund me an amount of money, which was not true. To cut the story short, they convinced me that I needed to load some software onto my computer. Now I am worried that my child is not safe on her computer. Could you please send me some advice on what I should do?
|
I think you're basically on the right track. Your client needs to provide some
token which your server will recognise or accept as proof that the client is
legitimate. How to do this depends on a number of factors and on what risks
you are trying to protect against. There is no one single solution.
Some of the things to consider may include
<ul>
<li>Complexity. This is possibly the biggest threat to both security and
reliability. The more complex the solution is, the more difficult it is to
prove correct. This can mean both an increase in security flaws as well as
just normal bugs and will result in higher maintenance costs. As the saying
goes "everything should be as simple as possible, but no simpler".</li>
<li>Level of assurance. What is it you need to be assured about? Do you just need
to know the client will behave correctly, i.e. make requests which the server
can understand and respond to rather than just consume resources and possibly
degrade service, or do you actually need to know it is a specific approved
client, or perhaps an approved user, or maybe even coming from an approved
source (IP address)? If you need high levels of assurance, what do you need to
do to prevent clients from lying - trying to trick your server by providing
fake credentials - and what do you need to do to prevent theft of credentials
(from sniffing network packets, MitM (man in the middle) attacks, etc.)? </li>
<li>The robustness of your server. Your server should be liberal in what it will
accept and conservative in what it sends. This basically means your server
should handle input in such a way that it will recover from bad input
(incorrect format, corrupted data, etc.) and not simply crash or exhibit
unexpected behaviour (such as dumping out sensitive data to the client) when
provided with unexpected input, and will be conservative in the responses it
sends (i.e. predictable and consistent). </li>
<li>The inherent value. What is the inherent value to you, your client and
others? This will help determine what level of protection you need to
consider. The likelihood of attack depends to some degree on the value (or
perceived value) of the service or data to an attacker. This can be difficult
to assess as you cannot always identify the motives an attacker may have. In
some cases, it is easy, such as when involving assets with a monetary
value. Other times, less so, such as when it involves 'bragging rights',
revenge or personal grudges or possibly some misguided belief or
understanding.</li>
<li>Understand the architecture and underlying technology. It is important to have
a good basic understanding of TCP/IP in order to identify the most appropriate
controls. For example, the difference between basic protocols (UDP vs TCP) and the
basics of how the connection is made will determine which controls are
most appropriate - for example, understanding at what point during the
connection you can make a decision, or whether a certain decision is even
meaningful, and what level of trust you can put in the information you're being
given. For example, it is easy to spoof the source address of a UDP packet
because there is no two-way connection handshake, but harder to do so with a
TCP socket because there is. </li>
<li>Avoid 'rolling your own'. Don't try to invent your own security solution. Use
an established and tried technique. In this respect, your question indicates
you're heading in the right direction. </li>
</ul>
Consider some basic use cases such as a standard web site (essentially a server
with clients who connect to a specific port) compared to a pay-per-use service,
such as Audible. In the first case, most of the time, the server is less
interested in the specific individual or client. What the server wants to know
is that the client understands the protocol and sends commands the server
understands. In the second case, the server needs to know the client is
legitimate, the user is a known paid-up subscriber and possibly that the client
is an approved client (in the case of Audible, the approved clients are able to
verify encryption keys so that the client can decrypt the audible book which has
DRM protection). In these two cases, the requirements are very different and
will be addressed in different ways. In the first case, you really just need to
know the client understands your protocol and will likely behave in an
acceptable manner. In the second case, you need to protect against forgeries and
will likely need more complex functionality, such as encrypted communications,
shared encryption keys, login credentials etc.
In your case, where you're experimenting and learning, you want to start off
simple. Defining a simple protocol which requires the client to send a known
'fact', such as a string or key, is probably fine. However, it also depends on
the environment you're working in. If you're just experimenting on systems within a
LAN which has reasonable firewalls between you and the Internet, then you
probably only need to worry about systems on your LAN. You should ensure you're
using non-privileged ports (i.e. above 1024) and avoid using ports which already
have a well defined or common use and don't leave your server operational when
not actively using it. If, on the other hand, you're operating in a more accessible
environment, such as using a cloud based platform, then you may need to be a
little more defensive - log what is connecting, perhaps limit connections to the
IP addresses of your test clients etc. Note that advice not to roll your own
security solution can be ignored when you are just experimenting or learning. In
some cases, trying to first solve the problem yourself is not a bad way of
learning and can help in later understanding of why established solutions are
designed the way they are - just don't try to do it for a real application.
There are many books which describe how to secure server applications. Key
topics to cover would be things like network and application firewalls, TCP/IP,
common encryption and hashing techniques, and the applications/protocols
which use them (HTTPS, SSL/TLS, SSH, PGP/GPG). Have a look at existing
protocols. The HTTP/HTTPS protocol is a good starting point as it is relatively
simple. Don't get overwhelmed by the complexities, especially with respect to
things like encryption and hashing. These are very complex topics, but you
really just need to understand the principles and how to apply them rather than
the very technical detail. This is the main reason you should use a known and
tested approach rather than try to invent your own.
You may also find some of the following useful
<pre><code>- https://www.owasp.org
- https://www.feistyduck.com/books/bulletproof-ssl-and-tls/bulletproof-ssl-and-tls-introduction.pdf
- https://letsencrypt.org/
</code></pre>
The OWASP site is particularly useful as it has lots of great information on
common security flaws/mistakes developers make and how to both detect and
prevent them. Much of this information can be generalised to any server/client
environment.
|
Most services do not validate the application being used as it is extremely hard to get a relatively reliable answer. Therefore what they can validate is the user through authentication and possibly their location through the IP address.
That said, some servers do validate the application, but because there is <strong>always</strong> a way to reverse engineer and bypass the verification, they have to constantly update it. An example of those would be certain poker applications that use multiple levels of obfuscation and regularly (almost daily) do updates that change the security codes and their location in the program to make it very hard to find before the next update.
Because of the effort needed, and because it is still impossible to be 100% sure, I advise you against doing that.
In any case, you should do validations on the server based on the data that is received to determine if it appears legitimate or not, <strong>never</strong> relying on what seems to be happening on the user's machine.
|
https://security.stackexchange.com
|
509,039
|
[
"https://physics.stackexchange.com/questions/509039",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/134777/"
] |
Coulomb's law states that if we have two charges <span class="math-container">$q_{1}$</span> and <span class="math-container">$q_{2}$</span>, then <span class="math-container">$q_{1}$</span> will act on <span class="math-container">$q_{2}$</span> with a force <span class="math-container">$$ \textbf{f}_{12}=\frac{q_{1}q_{2}}{r_{12}^2} { \hat {\textbf {r}}_{12}},$$</span>
and <span class="math-container">$q_{2}$</span> will similarly act on <span class="math-container">$q_{1}$</span> with a force <span class="math-container">$\textbf{f}_{21}$</span> such that
<span class="math-container">$$\,\textbf{f}_{21}=-\textbf{f}_{12}.$$</span>
Suppose the only things we knew were that the repulsive forces vary like <span class="math-container">$r^{-2}$</span>, and that they depend on the magnitude of the charges involved. Can we infer from these two observations alone that <span class="math-container">$\textbf{f}_{21}=-\textbf{f}_{12}$</span>? Or would we need further experiments to establish this equation?
The collinearity can be deduced from symmetrical considerations. What about the magnitude?
|
<blockquote>
Suppose the only things we knew were that the repulsive forces vary like <span class="math-container">$r^{-2}$</span>, and that they depend on the magnitude of the charges involved. Can we infer from these two observations alone that <span class="math-container">$\textbf{f}_{21}=-\textbf{f}_{12}$</span>?
</blockquote>
No. In fact, it is not true in general, for a system of two charged particles, that the force acting on charge 1 and the force acting on charge 2 obey Newton's third law at a particular time, given a particular frame of reference. In general there is radiation, Coulomb's law is false, and momentum is exchanged between the charges and the radiation.
|
It is worth repeating that <strong>laws</strong> in physics are <strong>axioms</strong>; there is no proof or derivation other than that the law is necessary so that a physical mathematical theory can choose those solutions that will <strong>fit existing data</strong> and, importantly, will be <strong>predictive</strong> in new situations. Laws in effect are a distillate of data.
Coulomb's law defines one of the possible forces, so that Newton's laws can be used in order to have classical mechanics solutions and predictability in kinematic problems involving charges.
|
https://physics.stackexchange.com
|
736,210
|
[
"https://physics.stackexchange.com/questions/736210",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/346417/"
] |
My question is in regards to the variational principle in approximating the wavefunction of Helium.
<strong>Some Background:</strong>
<span class="math-container">$$\hat{H}=-\frac{\hbar^2}{2m_{e}}\nabla_{1}^{2}-\frac{\hbar^2}{2m_{e}}\nabla_{2}^{2}-\frac{Ze^2}{4\pi\epsilon_{0}r_{1}}-\frac{Ze^2}{4\pi\epsilon_{0}r_{2}}+\frac{e^2}{4\pi\epsilon_{0}|r_{1}-r_{2}|}$$</span>
<span class="math-container">$$\hat{H}\Psi(\vec{r_{1}},\vec{r_{2}}) =E\Psi(\vec{r_{1}},\vec{r_{2}})$$</span>
For many-electron atoms such as helium, the (nonrelativistic) electronic TISE cannot be solved analytically as a result of the repulsion term in the electronic Hamiltonian. The first approximation to the wavefunction of helium is obtained by completely neglecting the repulsive term. Neglecting the repulsive term in the electronic Hamiltonian is called the orbital approximation, and it allows the Hamiltonian to separate into a sum of single-electron Hamiltonians and the total wavefunction to take the form of a product of single-electron wavefunctions.
<span class="math-container">$$\hat{H}=-\frac{\hbar^2}{2m_{e}}\nabla_{1}^{2}-\frac{\hbar^2}{2m_{e}}\nabla_{2}^{2}-\frac{Ze^2}{4\pi\epsilon_{0}r_{1}}-\frac{Ze^2}{4\pi\epsilon_{0}r_{2}}=\hat{H}_{1}+\hat{H}_{2}$$</span>
<span class="math-container">$$\Psi(\vec{r_{1}},\vec{r_{2}})=\psi(\vec{r_{1}})\psi(\vec{r_{2}})$$</span>
<span class="math-container">$$\psi(\vec{r_{1}})=\frac{1}{\sqrt{\pi}}\left(\frac{Z}{a_{0}}\right)^{\frac{3}{2}}e^{-\frac{Z}{a_{0}}r_{1}}$$</span><span class="math-container">$$\psi(\vec{r_{2}})=\frac{1}{\sqrt{\pi}}\left(\frac{Z}{a_{0}}\right)^{\frac{3}{2}}e^{-\frac{Z}{a_{0}}r_{2}}$$</span>
This approximation is poor; however, one way of optimizing it is by introducing an effective nuclear charge <em>ζ</em> in place of the nuclear charge <em>Z</em>. The effective nuclear charge is a parameter that can be optimized using the variational principle to produce the lowest energy.
<span class="math-container">$$E=\frac{\int\phi^*\hat{H}\phi d\tau }{\int\phi^*\phi d\tau}$$</span>
<strong>So here is my question:</strong>
When the variational principle is applied, the trial wavefunction <em>ϕ</em> is the product of single-electron wavefunctions derived from the orbital approximation with <em>ζ</em> as a parameter; however, the Hamiltonian that is used is the full electronic Hamiltonian (including the repulsive term). Why is the full Hamiltonian used in the variational principle instead of the approximate Hamiltonian?
|
<span class="math-container">$\newcommand{\bra}[1]{\langle #1 \rvert}$</span>
<span class="math-container">$\newcommand{\ket}[1]{\lvert #1 \rangle}$</span>
<span class="math-container">$\newcommand{\amat}[4]{\left(\begin{matrix}#1 & #2 \\ #3 & #4 \end{matrix}\right)}$</span>
<blockquote>
Why is the full Hamiltonian used in the variational principle instead of the approximate Hamiltonian?
</blockquote>
Because we seek an approximate solution to the full Hamiltonian. Any approximate solution to the ground state will have a higher energy, so we can hope to effectively and practically use the variation principle to find a state that is "close" to the ground state by minimizing the expectation value of the Hamiltonian with respect to the parameters of the trial solution.
One typical formulation of the variation principle takes a parametrized function <span class="math-container">$\psi$</span> (subject to <span class="math-container">$\bra{\psi}\psi\rangle=1$</span>) and constructs:
<span class="math-container">$$
\bra{\psi}\hat H\ket{\psi}\tag{1}
$$</span>
The quantity in Eq. (1) is then varied. If the variation is completely arbitrary, the variation recovers the TISE and its solution. If not, the function is not an exact solution and the variational estimate in Eq. (1) is always an upper bound for the lowest energy eigenvalue. Thus the variation principle may be especially useful for determining the ground state. (Upper bounds on the higher energy states can also be determined by using trial wavefunctions that are orthogonal to the exact eigenfunctions of the lower states). (See Bethe and Jackiw "Intermediate Quantum Mechanics" at pages 8-10, and 48.)
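As a concrete sketch of this machinery for helium (my own illustration, using the standard textbook expectation values for the scaled-orbital trial function, which give <span class="math-container">$E(\zeta)=\zeta^2-2Z\zeta+\tfrac{5}{8}\zeta$</span> in hartrees): minimizing over <span class="math-container">$\zeta$</span> reproduces <span class="math-container">$\zeta=Z-\tfrac{5}{16}$</span> and an energy of about <span class="math-container">$-2.85$</span> hartree, an upper bound on (and reasonably close to) the true ground-state energy of about <span class="math-container">$-2.90$</span> hartree.
<pre><code>Z = 2.0

def E(zeta):
    # <phi|H|phi> in hartrees for the product of scaled hydrogenic orbitals
    # (kinetic: zeta^2, nuclear attraction: -2 Z zeta, electron repulsion: 5 zeta / 8)
    return zeta**2 - 2 * Z * zeta + (5.0 / 8.0) * zeta

# crude scan over the parameter, mimicking "vary zeta and keep the lowest energy"
zeta_best = min((1.0 + 0.0001 * k for k in range(15000)), key=E)
print(f"optimal zeta ~ {zeta_best:.4f}   (exact: Z - 5/16 = {Z - 5/16:.4f})")
print(f"E(zeta)      ~ {E(zeta_best):.4f} hartree   (experiment: about -2.90 hartree)")
</code></pre>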
|
This is because the variational approach seeks an optimal wavefunction for a <em>fixed</em> Hamiltonian.
What is unusual about the helium atom is that the guess function is a solution to the problem <em>with interactions</em>, but its form is very closely related to the form of the exact solution <em>without interaction</em>. The interactions here are <em>fixed</em>; only the function is parametrized.
|
https://physics.stackexchange.com
|
1,562,294
|
[
"https://math.stackexchange.com/questions/1562294",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/244966/"
] |
(Quant Job interviews Questions and Answers Q3.22)
<blockquote>
Suppose we have an ant travelling on the edges of a cube, going from one vertex to another. The ant never stops and it takes it one minute to go along one edge. At every vertex the ant randomly picks one of the three available edges and starts going along that edge. We pick a vertex of the cube and put the ant there. What is the expected number of minutes that it will take to return to that vertex?
</blockquote>
Reading up on this, it seems I need to learn about Markov chains to answer this properly; however, for now here is my incorrect attempt: please could you tell me why it is incorrect?
<blockquote>
Assuming 'randomly' means uniformly with $p=\frac{1}{3}$, and observing that returning to the vertex must take an even number of steps, then by brute force:
$$ \mathbb E (T) = \sum_{n=1}^{\infty} 2n * (\frac{1}{3})^{2n} = 2 \sum_{n=1}^{\infty} n* (\frac{1}{9})^{n} = 2 \frac{1\over9}{({1-{1\over9}})^2} = {9 \over 4} $$
</blockquote>
|
Look at the distance from the origin after an even number $2k$ of minutes.
This distance can only be 0 or 2. In two minutes the ant can do the following:
$0 \rightarrow 1 \rightarrow 0$ with probability $1\cdot1/3$
$0 \rightarrow 1 \rightarrow 2$ with probability $1\cdot2/3$
$2 \rightarrow 1 \rightarrow 0$ with probability $2/3\cdot1/3$
$2 \rightarrow 3 \rightarrow 2$ with probability $1/3\cdot1$
$2 \rightarrow 1 \rightarrow 2$ with probability $2/3\cdot2/3$
The probability of going from distance 2 to distance 2 in 2 minutes without passing the start vertex is then $7/9$.
The probability of returning after $2k$ minutes is then
$p_{2}=1/3 $
$p_{2k}=\frac{2}{3}\cdot\left(\frac{7}{9}\right)^{k-2}\cdot\frac{2}{9}$ when $k\geq2$, we can check that these actually sum to 1.
The expectation time to return to the origin is then
$$ E(T) = \frac{2}{3} +\sum_{k=2}^{\infty}\frac{8k}{27}\left( \frac{7}{9}\right)^{k-2} = 8 $$
|
To model this as a Markov chain, let $$S=\{(0,0,0),(0,1,0),(0,0,1),(0,1,1),(1,0,0),(1,1,0),(1,0,1),(1,1,1)\}$$ and $P$ an $8\times8$ matrix with $P_{ij}=\frac13$ if $i$ and $j$ differ in exactly one digit, $0$ otherwise. Let $\{X_n:n\geqslant 0\}$ satisfy $$\mathbb P(X_{n+1}=j\mid X_0=i_1, \ldots, X_{n-1}=i_{n-1},X_n=i)=\mathbb P(X_{n+1}=j\mid X_n=i)=P_{ij}. $$
Then $\{X_n:n\geqslant 0\}$ is a Markov chain, and by symmetry, both the rows and columns of $P$ sum to $1$. Since $S$ is finite, $X$ has the unique stationary distribution $\pi$ being the uniform distribution over $S$ (i.e. $\pi_i = \frac18$ for each $i\in S$). Let
$$\tau_{ij} = \inf\{n>0:X_n=j\mid X_0=i\} $$
for each $i,j\in S$. It is known that $$\mathbb E[\tau_{ii}] = \frac1{\pi_i}, $$
so in this case the expected number of minutes to return to a vertex would be $8$.
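Both answers are easy to check numerically; here is a short simulation sketch (my own illustration) that walks the ant on the cube graph and averages the first return time, which comes out close to $8$.
<pre><code>import random

def neighbours(v):
    # the three vertices of the cube differing from v in exactly one coordinate
    return [tuple(v[j] ^ (j == i) for j in range(3)) for i in range(3)]

def first_return_time(start=(0, 0, 0)):
    v, t = start, 0
    while True:
        v = random.choice(neighbours(v))
        t += 1
        if v == start:
            return t

trials = 200_000
avg = sum(first_return_time() for _ in range(trials)) / trials
print(f"average return time over {trials} walks: {avg:.3f}   (theory: 8)")
</code></pre>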
|
https://math.stackexchange.com
|
559,643
|
[
"https://electronics.stackexchange.com/questions/559643",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/248108/"
] |
As an electronics hobbyist, I've already built a thing or two so this didn't seem like a complicated thing to do, but I was terribly mistaken. I wanted to build an FM modulated radio transceiver controlled by an Arduino board that would work anywhere between 86 and 520 MHz so that it'd include normal FM radio, VHF and UHF amateur bands and PMR and CB channels.
I expected there to be a miracle IC that would just require an audio and carrier wave input, rf amp and antenna, or that there would be plenty of similar projects already done that I could bounce off of, but hours of research gave me no answers, just more questions.
I came here to ask why radios are always built for specific bands like 136-148/200-260/400-430 MHz instead of working continuously - is there a legislative or physical limitation? And my second question is whether there is a way to approach this problem that would be friendly for someone who usually works with digital stuff (like an IC or module) instead of analog/radio electronics.
Thanks.
EDIT: Thank you all for your time, you were very helpful.
|
<blockquote>
I expected there to be a miracle IC that would just require an audio and carrier wave input
</blockquote>
Ah, but FM actively modulates a carrier; you can of course use the external oscillation input as carrier for a superhet design, but then you'll still need to generate the FM-modulated IF <em>and</em> suppress the leakage of the oscillator at the output (without suppressing the frequency-varying intended carrier). That's a rather complex thing to do in a single IC.
<blockquote>
I came here to ask why are the radios built always in specific bands like 136-148/200-260/400-430 MHz instead of working continuously - is there a legislative or physical limitation?
</blockquote>
Yes :D
both, mostly!
Also: If you only offer specific bands as device manufacturer, you don't have to guarantee performance in between. So, since it's not a big market you'd target with anything that's not commercially legal to do (the couple thousand ham rigs you could sell at most ... pffft).
When you do a superhet FM transmitter, you produce RF at <span class="math-container">\$f_{\text{LO}} \pm f_{IF}\$</span> (and of course other harmonics/intermodulation products), where your message signal is actually a frequency-changing <span class="math-container">\$f_{IF}\$</span>, but you only want the sum, not the difference (or vice versa); to isolate the sum frequency for a clean signal, you will need to filter everything below <span class="math-container">\$f_{LO}+f_{IF}\$</span>. That only works with a fixed filter bank if you can't pick from more than an octave of <span class="math-container">\$f_{LO}\$</span>.
<blockquote>
And my second question is whether is there a way to approach this problem that would be friendly for someone who usually works with digital stuff (like an IC or module) instead of analog/radio electronics.
</blockquote>
Sure; you could generate an IF signal with e.g. a microcontroller (FM modulation of a carrier between say 100 Hz and 75.1 kHz is not that mathematically hard to do); then, mix that up with about any LO (you can buy digitally controllable oscillators, Silabs has such) using about any mixer (SA612 is certainly a classic). Then, you get all the intermodulation products, and your filtering needs to select the one you actually want to transmit.
A <strong>very</strong> digital way to do that is the rpi_tx software, which uses the PWM units on a raspberry Pi SoC as generator for an RF signal; you'll have to add a solid amount of filtering to get rid of these harmonics you <em>don't</em> want (you only want exactly one of them).
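To sketch the "generate the FM-modulated IF digitally" idea numerically (my own illustration with made-up sample rate, IF, deviation and test tone; not a ready-to-transmit design): frequency modulation is just a running phase accumulation of the instantaneous frequency.
<pre><code>import math

fs = 1_000_000      # sample rate, Hz (hypothetical)
f_if = 100_000      # intermediate-frequency carrier, Hz
f_dev = 5_000       # peak deviation, Hz (narrowband-voice-like)
f_msg = 1_000       # test tone, Hz

phase, samples = 0.0, []
for n in range(fs // 100):                        # 10 ms of signal
    msg = math.sin(2 * math.pi * f_msg * n / fs)  # message, in [-1, 1]
    inst_freq = f_if + f_dev * msg                # instantaneous frequency
    phase += 2 * math.pi * inst_freq / fs         # integrate frequency into phase
    samples.append(math.cos(phase))

# `samples` is the FM-modulated IF; mixing with an LO and filtering off the unwanted
# sideband/harmonics (as described above) would move it to the final frequency.
print(len(samples), "samples generated")
</code></pre>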
|
Most of the answers are treating the transmit side. Basically, however, the fundamental problem is that building such wideband transceivers, FM, AM or whatever, is just very difficult.
I suspect that you are not going to find any transceivers that cover a continuous range up to 500 MHz. If you did, they would probably include FM along with everything else, because once you have the basic circuitry in place for a transceiver, it is easy to add different modes.
On the legislative side, there is a sharp difference between broadcast FM and FM for voice communications. Basically, FM designed for voice is narrowband; it does not occupy a much greater bandwidth than a similar AM signal. It is a little bit better from a noise perspective than AM, but it is quite spectrally efficient.
Broadcast FM, on the other hand, occupies a much greater bandwidth because it uses a much higher deviation ratio. This provides much better noise suppression when you have a good strong signal. However a broadcast FM signal is somewhere around 100 or 120 kilohertz wide (I cannot remember exactly).
|
https://electronics.stackexchange.com
|
9,302
|
[
"https://physics.stackexchange.com/questions/9302",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1927/"
] |
I think I saw in a video that if dark matter wasn't repulsive to dark matter, it would have formed dense massive objects or even black holes which we should have detected.
So, could dark matter be repulsive to dark matter? If so, what are the reasons? Could it be like the opposite pole of gravity that attracts ordinary matter and repulses dark matter?
|
Lubos Motl's answer is exactly right. Dark matter has "ordinary" gravitational properties: it attracts other matter, and it attracts itself (i.e., each dark matter particle attracts each other one, as you'd expect).
But it's true that dark matter doesn't seem to have collapsed into very dense structures -- that is, things like stars and planets. Dark matter does cluster, collapsing gravitationally into clumps, but those clumps are much larger and more diffuse than the clumps of ordinary matter we're so familiar with. Why not?
The answer seems to be that dark matter has few ways to dissipate energy. Imagine that you have a diffuse cloud of stuff that starts to collapse under its own weight. If there's no way for it to dissipate its energy, it can't form a stable, dense structure. All the particles will fall in towards the center, but then they'll have so much kinetic energy that they'll pop right back out again. In order to collapse to a dense structure, things need the ability to "cool."
Ordinary atomic matter has various ways of dissipating energy and cooling, such as emitting radiation, which allow it to collapse and not rebound. As far as we can tell, dark matter is weakly interacting: it doesn't emit or absorb radiation, and collisions between dark matter particles are rare. Since it's hard for it to cool, it doesn't form these structures.
|
Dark matter surely has to carry a positive mass, and by the equivalence principle, all positive masses have to exert attractive gravity on other masses.
Also, from the viewpoint of phenomenological cosmology, we obviously want dark matter to attract itself. It has to attract visible matter because this is why dark matter was introduced in the first place: it helps to keep the stars in a galaxy even though they're orbiting more quickly than one would expect from the distribution of visible mass in the galaxy.
For this reason, the force between dark matter and ordinary matter is surely attractive. The force between dark matter and dark matter has to be attractive, too. In fact, dark matter has played the dominant role in structure formation - the creation of the initial non-uniformities that ultimately became galaxies, clusters of galaxies, and so on. The dark matter halos are larger than the visible parts of the galaxies: the visible stars arose as "cherries on the pie" near the centers of the dark matter halos.
There's no doubt that the gravitational force between any pair of particle-like entities is attractive. This is linked to the positive mass i.e. positive energy - which is needed for stability of the vacuum (if there existed negative-energy states, the vacuum would decay into them spontaneously which would be catastrophic and is not happening) - and the basic properties of general relativity. In particular, there's a lot of confusion among the laymen whether antimatter has an attractive gravity. Yes, of course, the matter-antimatter and antimatter-antimatter gravitational forces are known to be attractive, too.
The non-gravitational forces between dark matter are almost certainly short-range forces. In particular, dark matter doesn't interact with electromagnetism, the only long-range non-gravitational force (mediated by a massless photon) we know - that's why it's dark (emits no light).
The only repulsive force that arises in similar cosmological discussions is one due to dark energy - or the cosmological constant, to be more specific. Dark energy is something very different than dark matter. This force makes the expansion of the Universe accelerate and it is due to the negative pressure of dark energy which may be argued to cause this "repulsive gravity". However, dark energy is not composed of any particles. It's just a number uniformly attached to every volume of space.
|
https://physics.stackexchange.com
|
138,582
|
[
"https://cs.stackexchange.com/questions/138582",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/134947/"
] |
I was looking to solve this reduction, but I don't see how to construct the new graph. It seems very simple but I'm not capable of doing it.
Here is the complete explanation of the reduction.
We consider a variant of the independent set problem which we shall call, Independent Set with
a Fixed Node, in which the input contains additionally a vertex <span class="math-container">$u$</span> and it is required that the
independent set contains <span class="math-container">$u$</span>.
|
As I understand it, your problem is a decision problem defined as follows:
Independent set with fixed vertex (ISFV):
<ul>
<li>Input: a graph <span class="math-container">$G = (V, E)$</span>, a vertex <span class="math-container">$u \in V$</span>, an integer <span class="math-container">$k$</span>.</li>
<li>Question: is there an independent set of size <span class="math-container">$k$</span> containing <span class="math-container">$u$</span>?</li>
</ul>
Independent set (IS) is defined as:
<ul>
<li>Input: a graph <span class="math-container">$G = (V, E)$</span>, an integer <span class="math-container">$k$</span>.</li>
<li>Question: is there an independent set of size <span class="math-container">$k$</span>?</li>
</ul>
Suppose you can solve ISFV. Then you can solve IS by running ISFV for each <span class="math-container">$u\in V$</span> and checking if the answer is yes for any <span class="math-container">$u\in V$</span>. Since there are a polynomial number of vertices, the reduction is indeed polynomial.
Another way to do it is to construct the graph <span class="math-container">$G' = (V\cup\{u\}, E)$</span> (adding a vertex with no other edge), and check ISFV with <span class="math-container">$G'$</span>, <span class="math-container">$u$</span> and <span class="math-container">$k + 1$</span>, since the vertex <span class="math-container">$u$</span> can always be added to an independent set.
|
Suppose that we are given a graph <span class="math-container">$G$</span> and want to know whether it has an independent set of size <span class="math-container">$k$</span> containing <span class="math-container">$u$</span>. Such an independent set cannot contain any neighbor of <span class="math-container">$u$</span>, and so it is not hard to check that <span class="math-container">$G$</span> contains such an independent set iff the graph obtained by removing <span class="math-container">$u$</span> and all of its neighbors contains an independent set of size <span class="math-container">$k-1$</span>.
We can also reduce in the other direction by adding a dummy vertex which is disconnected from the rest of the graph.
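Here is a small brute-force sketch of the first direction (my own illustration; the solver is exponential and only there to make the construction concrete): to decide ISFV, delete <span class="math-container">$u$</span> and all of its neighbours, then ask for an independent set of size <span class="math-container">$k-1$</span> in what remains.
<pre><code>from itertools import combinations

def has_independent_set(vertices, edges, k):
    # brute-force IS decision, exponential time (illustration only)
    edge_set = {frozenset(e) for e in edges}
    return any(
        all(frozenset(p) not in edge_set for p in combinations(S, 2))
        for S in combinations(vertices, k)
    )

def has_is_with_fixed_vertex(vertices, edges, u, k):
    # ISFV via the reduction: remove u and its neighbours, look for size k-1
    closed_nbhd = {v for e in edges if u in e for v in e} | {u}
    rem_v = [v for v in vertices if v not in closed_nbhd]
    rem_e = [e for e in edges if not (set(e) & closed_nbhd)]
    return has_independent_set(rem_v, rem_e, k - 1)

# tiny example: the path 1-2-3-4
V, E = [1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]
print(has_is_with_fixed_vertex(V, E, u=1, k=2))   # True: {1, 3} works
print(has_is_with_fixed_vertex(V, E, u=2, k=3))   # False: no size-3 IS contains 2
</code></pre>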
|
https://cs.stackexchange.com
|
337,385
|
[
"https://math.stackexchange.com/questions/337385",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/67928/"
] |
Suppose we have matrices $A, B, C$ of dimensions $m \times n, m\times n, n \times l$ respectively. How can we prove $(A+B)\circ C = A\circ C + B \circ C$ (using the summation notation method?)
|
Tedious way...coefficientwise:
$$
((A+B)C)_{i,j}=\sum_{k=1}^n(A+B)_{i,k}C_{k,j}=\sum_{k=1}^n(A_{i,k}+B_{i,k})C_{k,j}
$$
$$
=\sum_{k=1}^n(A_{i,k}C_{k,j}+B_{i,k}C_{k,j})=\sum_{k=1}^nA_{i,k}C_{k,j}+\sum_{k=1}^nB_{i,k}C_{k,j}
$$
$$
=(AC)_{i,j}+(BC)_{i,j}=(AC+BC)_{i,j}
$$
for all $1\leq i\leq m$ and all $1\leq j\leq l$. So $(A+B)C=AC+BC$.
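A quick numerical spot-check of the same identity with random matrices (my own sketch; the sizes are chosen arbitrarily):
<pre><code>import numpy as np

rng = np.random.default_rng(0)
m, n, l = 3, 4, 5
A, B = rng.random((m, n)), rng.random((m, n))
C = rng.random((n, l))

# entrywise identity proved above: ((A+B)C)_{ij} = (AC)_{ij} + (BC)_{ij}
print(np.allclose((A + B) @ C, A @ C + B @ C))   # True (up to floating-point rounding)
</code></pre>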
|
<strong>Hint:</strong> Look at the linear maps represented by $A$, $B$ and $C$.
|
https://math.stackexchange.com
|
330,602
|
[
"https://softwareengineering.stackexchange.com/questions/330602",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/82182/"
] |
The QA team should ideally do their testing on an environment that almost exactly matches the prod env (to minimize uncaught bugs that arise due to setting differences).
If that's true, does the QA team typically do the testing again on prod post deployment in the Googles and Amazons out there? If the answer is yes, that sounds redundant... but also, how can they not test production?
|
Depends what you're doing, but often you can't test in production because the system is now attached to real resources. In the case of Amazon, would you run a real order with a real credit card and wait for the book to arrive? You often need to be careful about putting test data into a production system.
Once it's gone live, you're dependent on your monitoring systems to give you early warnings of bugs. Sudden drop in completed orders? Better roll back the update then go digging in the logs to find out what happened.
(Arguably what SpaceX have been doing is "testing in production" with their rocket recovery system: it's not really cost feasible to do dummy launches, so they launch with the real payload and then see if the system can manage to land the rocket.)
|
You don't test in production because you won't find anything you don't find in a proper test environment. Maybe some kind of quick check if deployment went ok, but this is not a big part of testing.
First you test your code (unit test).
Then you test your code in the whole application (integration test).
Then you test your code in the whole system landscape (system test).
All further testing is done by people who don't gain anything from finding or reporting bugs, which - when you think about it - sucks for the quality of the product.
|
https://softwareengineering.stackexchange.com
|
964,509
|
[
"https://math.stackexchange.com/questions/964509",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/181674/"
] |
In isosceles trapezoid $ABCD$, $AB=6$, $BC=9$, $CD=8$, and $AD=9$. Find the (perpendicular) distance from point $D$ to $BC$.
|
Here are some questions you should be able to answer, which will help you to find the answer you're looking for.
<ol>
<li>What is the distance from $AB$ to $CD$? (<em>Hint</em>: Use Pythagorean Theorem.)</li>
<li>What is the area of triangle $BCD,$ given the answer to the first question?</li>
<li>If we consider $BC$ as the "base" of the triangle $BCD,$ what is the answer to your question, given the answer to the second question? (<em>Hint</em>: Try rotating your picture. What is the height of the triangle $BCD$ if $BC$ is the base?)</li>
</ol>
If you have trouble answering any of these questions, let me know, but give them your best shot.
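If you want to check your work afterwards, here is a short computation following exactly these three steps (my own sketch, assuming $AB\parallel CD$, which is how I read the trapezoid):
<pre><code>import math

AB, BC, CD, AD = 6, 9, 8, 9

# 1. distance between the parallel sides: the horizontal offset on each side is
#    (CD - AB) / 2 = 1, so apply the Pythagorean theorem to a leg of length 9
h = math.sqrt(AD**2 - ((CD - AB) / 2) ** 2)      # = sqrt(80)

# 2. area of triangle BCD with CD as base (its height is h, since B lies on line AB)
area_BCD = 0.5 * CD * h                          # = 16*sqrt(5)

# 3. read the same area with BC as base: area = (1/2) * BC * dist(D, BC)
print(2 * area_BCD / BC)                         # = 32*sqrt(5)/9, about 7.95
</code></pre>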
|
A picture can help a lot:
$\hspace{3cm}$<img src="https://i.stack.imgur.com/00Dtl.png" alt="enter image description here">
<strong>Hint:</strong> Find a couple of similar right triangles (one of which you know a couple of sides).
|
https://math.stackexchange.com
|
71,509
|
[
"https://physics.stackexchange.com/questions/71509",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/27232/"
] |
The classic example of an indeterministic system is a radioactive isotope, e.g. the one that kills Schrödinger's cat.
I get there are arguments against hidden variables in quantum mechanics, but how could they be so sure, back in the twenties, that the strong nuclear forces involved in radioactivity were not governed by hidden variables rather than true randomness?
Einstein was very unhappy about the indeterminism of quantum mechanics regarding even well-understood effects like Young's slit experiments, but it seems kind of ideological and brash on the part of Heisenberg & Co to extend the indeterminism over to phenomena they hadn't even begun to understand, like alpha decay.
Is there a reason for this early self-assuredness in postulating indeterminism?
|
Schrödinger came up with the cat in 1935, which was relatively late in the development of quantum mechanics.
Back in the 1920's there had been a lot more uncertainty. The Copenhagen school had wanted to quantize the atom while leaving the electromagnetic field classical, as formalized in the Bohr-Kramers-Slater (BKS) theory. De Broglie's 1924 thesis included a hypothesis that there were hidden variables involved in the electron. In the 20's virtually nothing was known about the nucleus; the neutron had been theorized but not experimentally confirmed.
But we're talking about 1935. This was after the uncertainty principle, after Bothe-Geiger, after the discovery of the neutron, and after the EPR paper. (Schrödinger proposed the cat in a letter discussing the interpretation of EPR.) By this time it had long ago been appreciated that if you tried to quantize one field but not another (as in BKS), you had to pay a high price (conservation of energy-momentum only on a statistical basis), and experiments had falsified such a mixed picture for electrons interacting with light. It would have been very unnatural to quantize electrons and light, but not neutrons and protons. Neutrons and protons were material particles and therefore in the same conceptual category as electrons -- which had been the <em>first</em> particles to be quantized. Ivanenko had already proposed a nuclear shell model in 1932.
|
It was known that a nucleus existed back in the 20's. If you ever did experiments with nuclear decay you would see the hallmarks of a Poisson process. I am talking about simple undergraduate experiments, which, I guess, in most countries even theoreticians specializing in other branches have to do before they get their degree (like I did).
You could also see that the half-life of a sample does not depend on the size of the sample. This invalidates any claim that there is some unknown deterministic interaction between nuclei which causes the statistical-like behavior, because then the physics would change with the size of the sample. Therefore, one could only conclude that there was some hidden determinism inside the nucleus itself which causes the decay to simulate a Poisson process.
This would make a nucleus a highly complicated system, indeed. Statistical physics teaches us how equilibrium classical systems behave. Why is the nucleus not at equilibrium with itself?! The nucleus would have to be very special indeed to deviate from Boltzmann's theory. This would require highly unusual behavior and completely unknown physical mechanisms. A theory like that would look very unnatural. It is a much better and more natural conclusion to extend quantum indeterminism known from previously understood experiments to the nucleus itself. In the end this approach proved right. When you actively research a completely new theory you can never be 100% sure that what you are doing is correct until you finish your research. You need to look for the most natural and consistent theory you can and hope for the best. :)
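A tiny simulation sketch of that observation (my own illustration, with an arbitrary per-interval decay probability): every nucleus decays independently with the same probability, so the half-life read off the decay curve comes out the same no matter how many nuclei you start with.
<pre><code>import math
import random

def simulated_half_life(n0, p=0.01, rng=random.Random(1)):
    # count time steps until fewer than half of the initial nuclei remain;
    # each nucleus decays independently with probability p per step
    n, steps = n0, 0
    while n > n0 / 2:
        n -= sum(1 for _ in range(n) if rng.random() < p)
        steps += 1
    return steps

for n0 in (1_000, 10_000, 100_000):
    print(f"N0 = {n0:>7}: half-life ~ {simulated_half_life(n0)} steps "
          f"(theory ~ {math.log(2) / 0.01:.0f})")
</code></pre>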
|
https://physics.stackexchange.com
|
20,410
|
[
"https://dba.stackexchange.com/questions/20410",
"https://dba.stackexchange.com",
"https://dba.stackexchange.com/users/2321/"
] |
I am more experienced with SQL Server and Sybase than Oracle, and understand those products well. I've been asked to look for ways to reduce the server estate running Oracle. I understand that an instance in Oracle maps to a database hosting many tablespaces. I have a fairly good grasp of the fundamentals, however if I wanted to consolidate SERVER1,..,SERVER4 running Oracle database into one server what would be the best way to do it physically? I am considering Virtual as well using a DBaaS (Database as a Service) model, but am curious if it can/should be done physically.
Is it possible to have four separate instances point to four separate databases on one machine? Or would I have to merge the four databases into one database on the consolidated server and manage the schemas to ensure there are no name conflicts? If I did that would I have one instance or four?
I have read the documentation but I'm still not 100% sure about this area.
|
You have two options:
<ol>
<li>Run multiple Oracle instances on the same machine</li>
<li>Consolidate all of your Oracle instances into a single instance,
placing the data in separate schemas</li>
</ol>
Since you're familiar with SQL Server/Sybase, I'll explain the difference between them & Oracle as far as databases and users are concerned.
<ul>
<li>A SQL Server database is equivalent to an Oracle Schema. An Oracle schema is owned by a single user</li>
<li>A SQL Server dataserver is equivalent to an Oracle Instance</li>
</ul>
Running 4 instances on one machine is trivial, so I won't explain further.
Consolidating to a single database is also easy if the separate databases don't have conflicting schema names. If they do, it may not be an issue as long as the applications/interfaces/packages don't have hard-coded schema names - it's easy to export data from one schema in a database & import it into a different schema on another database.
|
It is fairly easy to consolidate multiple databases into one real database. In Oracle, a database is the collection of files. Users connect to the database by connecting to an instance. A database can be served by multiple instances, in which case you are running RAC.
So for simply consolidating database into one single database you don't need RAC but you could, if you want/need to do this.
If you are consolidating into one database there are a few things to take into account:
<ol>
<li>namespaces</li>
<li>service level agreements - don't make maintenance impossible by combining conflicting SLAs.</li>
<li>performance isolation - use Resource Manager to handle this</li>
<li>application isolation - make sure the apps all use their own tns_alias to connect</li>
<li>services - give every application its own service name in the database, if possible.</li>
</ol>
When you have naming conflicts, there is a problem; combining is not possible.
You might want to take some downtime for maintenance/upgrades. If you can not get a downtime from all applications at the same time, you have a problem.
Using Resource Manager you can give a certain performance guarantee for the specific services.
Services are a smart thing to use; they are the easiest way to see how resources are used, compared to one another.
Easiest is to run multiple instances on a single server, each serving its own database. This is the easiest but not the smartest thing to do. Smartest is to have a single instance on a single server. This is because every instance considers itself the master of the server: you cannot easily isolate their resource usage, as you can within a single instance. If you want to give some performance guarantee on a multiple-instance server, the setup will grow in complexity because in many cases you need to set up multiple projects and users to run your databases under.
A single database can easily support a few hundred applications, a lot cheaper than using a few hundred databases. This quickly saves BIG money on a yearly basis by making good use of Oracle features.
|
https://dba.stackexchange.com
|
260,257
|
[
"https://mathoverflow.net/questions/260257",
"https://mathoverflow.net",
"https://mathoverflow.net/users/100898/"
] |
Let $a,q$ be positive integers. I am trying to evaluate the following sum:
$\sum_{\substack{1<a<q \\(a,q)>1 \\ (a+1,q)>1}}1$. Is there a formula for calculating such sums?
Here is an example:
Let $q=15$.
The numbers $a$ with $1<a<15$ and $(a,15)>1$ (the multiples of the divisors $3$ and $5$) are:
3, 5, 6, 9, 10, 12.
But since the two pairs (5,6) and (9,10) are consecutive numbers bigger than one but less than $q$, we get:
$\sum_{\substack{1<a<15 \\(a,15)>1 \\ (a+1,15)>1}}1=2$.
|
At first, we count the number of residues $a$ for which both $a$ and $a+1$ are coprime with $q$. Let $q=\prod p_i^{k_i}$ be the factorization of $q$. For any $p_i$, there exist $p_i-2$ admissible remainders modulo $p_i$ (the excluded remainders are 0 and $-1$), thus $(p_i-2)p_i^{k_i-1}$ admissible remainders modulo $p_i^{k_i}$, thus by the Chinese Remainder Theorem the answer equals $$F(q):=\prod (p_i-2)p_i^{k_i-1}=q\prod(1-2/p_i).$$
Now your question. There exist $\varphi(q)$ residues $a$ for which $(a,q)=1$, as many residues for which $(a+1,q)=1$, $F(q)$ residues for which both $(a,q)=1$, $(a+1,q)=1$. Thus there exist $\varphi(q)-F(q)$ residues $a$ for which $(a,q)=1$ and $(a+1,q)>1$. But the total number of $a$ for which $(a+1,q)>1$ equals $q-\varphi(q)$. Therefore the answer to your initial question is $q-2\varphi(q)+F(q)$.
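A brute-force check of this count against the closed form $q-2\varphi(q)+F(q)$ (my own sketch):
<pre><code>from math import gcd

def phi(q):
    return sum(1 for a in range(1, q + 1) if gcd(a, q) == 1)

def F(q):
    # residues a mod q with gcd(a, q) = gcd(a + 1, q) = 1
    return sum(1 for a in range(q) if gcd(a, q) == 1 and gcd(a + 1, q) == 1)

def direct(q):
    # the sum in the question: 1 < a < q with gcd(a, q) > 1 and gcd(a + 1, q) > 1
    return sum(1 for a in range(2, q) if gcd(a, q) > 1 and gcd(a + 1, q) > 1)

for q in (15, 30, 105, 210, 1001):
    print(q, direct(q), q - 2 * phi(q) + F(q))   # the two columns agree
</code></pre>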
|
What you have written down is already a formula for calculating the sum, so really you need to be more precise about what the question is.
But here are some comments which give a simpler formula in the case where $q$ is only divisible by a few primes.
If $q$ is a prime power $p^e$, then the sum is zero, because $(a,q)>1$ if and only if $p$ divides $a$, and that can't happen twice in a row.
If, as in your example, $q$ has two prime divisors $p_1$ and $p_2$, then $q=p_1^{e_1}p_2^{e_2}$ and what must be going on is that one of the two terms $a,a+1$ is a multiple of $p_1$ and the other a multiple of $p_2$. Hence $a$ either satisfies $a=0$ mod $p_1$ and $a=-1$ mod $p_2$ or $a=-1$ mod $p_1$ and $a=0$ mod $p_2$. In each case there is one solution mod $p_1p_2$ by the Chinese Remainder Theorem, and hence $q/p_1p_2=p_1^{e_1-1}p_2^{e_2-1}$ solutions between $1$ and $q$, giving us $2q/p_1p_2$ solutions in this case.
In the general case there is a problem though. Say three primes $p_1,p_2,p_3$ divide $q$. Then we are interested in solving $a=-1$ mod $p_1$ and $a=0$ mod $p_2$ ($q/p_1p_2$ solutions) OR $a=-1$ mod $p_1$ and $a=0$ mod $p_3$ ($q/p_1p_3$ solutions) OR... etc etc, so $3\times2=6$ possibilities giving what looks like $2q(1/p_1p_2+1/p_2p_3+1/p_3p_1)=2q(p_1+p_2+p_3)/(p_1p_2p_3)$ solutions. However unfortunately we have counted some solutions twice here -- there is one number mod $p_1p_2p_3$ which is $-1$ mod $p_1$ and $0$ mod $p_2$ and $p_3$ and we counted it too often. For three or more primes dividing $q$ it's hence messier and I'm not sure there's a simple formula.
Here's the explicit answer when 3 primes divide $q$. We may as well assume $q$ is squarefree (just multiply the answer by $q/p_1p_2p_3$ otherwise). The number of numbers between 1 and $q$ which are $0$ mod $p_1$ and $-1$ mod $p_2$ and congruent to $*$, neither $0$ nor $-1$, mod $p_3$ is $p_3-2$. Similarly for $(-1,0,*)$, $(-1,*,0)$ etc etc giving us $2(p_1+p_2+p_3)-12$. But now you need to count the number of times we are $(a,b,c)$ with $a,b,c$ all either $0$ or $-1$, but not all the same; this gives a further 6. So in this case we get $2(p_1+p_2+p_3)-6$.
The general case will be messier and I don't know if one can do better than this method.
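For what it's worth, a brute-force check (my own sketch, not from the answer) of the squarefree three-prime formula $2(p_1+p_2+p_3)-6$ agrees with a direct count:
<pre><code>from math import gcd

def count(q):
    # number of 1 < a < q with gcd(a, q) > 1 and gcd(a+1, q) > 1
    return sum(1 for a in range(2, q) if gcd(a, q) > 1 and gcd(a + 1, q) > 1)

for p1, p2, p3 in [(3, 5, 7), (3, 5, 11), (5, 7, 11)]:
    assert count(p1 * p2 * p3) == 2 * (p1 + p2 + p3) - 6
print("2*(p1+p2+p3) - 6 matches the brute-force count for squarefree q")
</code></pre>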
|
https://mathoverflow.net
|
59,062
|
[
"https://dba.stackexchange.com/questions/59062",
"https://dba.stackexchange.com",
"https://dba.stackexchange.com/users/9353/"
] |
I have a table (not designed by me) which has 20 variably named columns. That is, depending on what type of record you are looking at, the applicable name of the column can change.
The possible column names are stored in another table, that I can query very easily.
Therefore, the query I'm really looking for goes something like this:
<pre><code>SELECT Col1 AS (SELECT ColName FROM Names WHERE ColNum = 1 and Type = @Type),
Col2 AS (SELECT ColName FROM Names WHERE ColNum = 2 and Type = @Type)
FROM Tbl1
WHERE Type = @Type
</code></pre>
Obviously that doesn't work, so how can I get a similar result?
<s>
I've tried building a query string and <code>EXECUTE</code>ing it, but that just returns "Command(s) Completed Successfully" and doesn't seem to return a rowset.
</s>
It turns out I was using an incorrect query to build the dynamic SQL and as such built an empty string. SQL Server definitely executed the empty string correctly.
Note that the reason I need this to occur, rather than simply hard coding the column names, is that the column names are user configurable.
|
Try the following code:
<pre><code>CREATE TABLE #Names
(
[Type] VARCHAR(50),
ColNum SMALLINT,
ColName VARCHAR(50),
ColDataType VARCHAR(20)
)
INSERT INTO #Names VALUES
('Customer', 1, 'CustomerID', 'INT'),
('Customer', 2, 'CustomerName', 'VARCHAR(50)'),
('Customer', 3, 'CustomerJoinDate', 'DATE'),
('Customer', 4, 'CustomerBirthDate', 'DATE'),
('Account', 1, 'AccountID', 'INT'),
('Account', 2, 'AccountName', 'VARCHAR(50)'),
('Account', 3, 'AccountOpenDate', 'DATE'),
('CustomerAccount', 1, 'CustomerID', 'INT'),
('CustomerAccount', 2, 'AccountID', 'INT'),
('CustomerAccount', 3, 'RelationshipSequence', 'TINYINT')
CREATE TABLE #Data
(
[Type] VARCHAR(50),
Col1 VARCHAR(50),
Col2 VARCHAR(50),
Col3 VARCHAR(50),
Col4 VARCHAR(50),
Col5 VARCHAR(50),
Col6 VARCHAR(50),
Col7 VARCHAR(50)
)
INSERT INTO #Data VALUES
('Customer', '1', 'Mr John Smith', '2005-05-20', '1980-11-15', NULL, NULL, NULL),
('Customer', '2', 'Mrs Hayley Jones', '2009-10-10', '1973-04-03', NULL, NULL, NULL),
('Customer', '3', 'ACME Manufacturing Ltd', '2012-12-01', NULL, NULL, NULL, NULL),
('Customer', '4', 'Mr Michael Crocker', '2014-01-13', '1957-01-23', NULL, NULL, NULL),
('Account', '1', 'Smith-Jones Cheque Acct', '2005-05-25', NULL, NULL, NULL, NULL),
('Account', '2', 'ACME Business Acct', '2012-12-01', NULL, NULL, NULL, NULL),
('Account', '3', 'ACME Social Club', '2013-02-10', NULL, NULL, NULL, NULL),
('Account', '4', 'Crocker Tipping Fund', '2014-01-14', NULL, NULL, NULL, NULL),
('CustomerAccount', '1', '1', '1', NULL, NULL, NULL, NULL),
('CustomerAccount', '2', '1', '2', NULL, NULL, NULL, NULL),
('CustomerAccount', '2', '3', '2', NULL, NULL, NULL, NULL),
('CustomerAccount', '3', '2', '1', NULL, NULL, NULL, NULL),
('CustomerAccount', '3', '3', '1', NULL, NULL, NULL, NULL),
('CustomerAccount', '4', '2', '2', NULL, NULL, NULL, NULL),
('CustomerAccount', '4', '4', '1', NULL, NULL, NULL, NULL)
DECLARE @Type VARCHAR(50) = 'Account' -- Or Customer, or CustomerAccount
DECLARE @SQLText NVARCHAR(MAX) = ''
SELECT @SQLText += 'SELECT '
SELECT @SQLText += ( -- Add in column list, with dynamic column names.
SELECT 'CONVERT(' + ColDataType + ', Col' + CONVERT(VARCHAR, ColNum) + ') AS [' + ColName + '],'
FROM #Names
WHERE [Type] = @Type FOR XML PATH('')
)
SELECT @SQLText = LEFT(@SQLText, LEN(@SQLText) - 1) + ' ' -- Remove trailing comma
SELECT @SQLText += 'FROM #Data WHERE [Type] = ''' + @Type + ''''
PRINT @SQLText
EXEC sp_executesql @SQLText
</code></pre>
This returns the SELECT statement: <code>SELECT CONVERT(INT, Col1) AS [AccountID],CONVERT(VARCHAR(50), Col2) AS [AccountName],CONVERT(DATE, Col3) AS [AccountOpenDate] FROM #Data WHERE [Type] = 'Account'</code>
|
This sounds prime for a front-end display solution. Query 1 would pull back your data, Query 2 would pull back the column names, and in code, when you build whatever structure you use for display, you set the headers from the second query.
While a pure SQL method may be possible, it will be dynamic SQL and code maintenance would be a nightmare.
Also, you're probably looking for <code>sp_executesql</code> and not just <code>EXECUTE N'Query String'</code>, as that may fix your issue of "command(s) completed successfully".
|
https://dba.stackexchange.com
|
151,163
|
[
"https://stats.stackexchange.com/questions/151163",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
] |
<blockquote>
Implement an estimator using Monte Carlo integration of the quantity
$$\theta=\int_0^1e^{-x^2}(1-x)dx$$ Estimate $\theta$ with a variance
lower than $10^{-4}$ by writing the variance of this estimator depending on
sample size.
</blockquote>
We can write
$$\theta=\int \phi(x)f(x)dx$$
where $\phi(x)$ is a function and $f(x)$ is a density so that $$\phi(x)f(x)=e^{-x^2}(1-x)\mathbb{I}_{(0,1)}(x)$$ The exercise leaves open the choice of the density.
Thus the estimator has the form $$\hat{\theta}=\frac{1}{n}\sum_i \phi(x_i)$$
The exercise asks for an estimate of $\theta$ with variance lower than $0.0001$ by expressing the variance of the estimator as a function of n.
|
The problem is that without knowing exactly what <span class="math-container">$\theta$</span> is, we cannot know the variance of its Monte-Carlo estimator. The solution is to <em>estimate</em> that variance and hope the estimate is sufficiently close to the truth.
<hr />
<strong>The very simplest form of Monte-Carlo estimation</strong> surrounds the graph of the integrand, <span class="math-container">$f(x) = e^{-x^2}(1-x)$</span>, by a box (or other congenial figure that is easy to work with) of area <span class="math-container">$A$</span> and places <span class="math-container">$n$</span> independent uniformly random points in the box. The proportion of points lying under the graph, times the area <span class="math-container">$A$</span>, estimates the area <span class="math-container">$\theta$</span> under the graph. As usual, let's write this estimator of <span class="math-container">$\theta$</span> as <span class="math-container">$\hat\theta$</span>. For examples, see the figure at the end of this post.
Because the chance of any point lying under the graph is <span class="math-container">$p = \theta / A$</span>, the count <span class="math-container">$X$</span> of points lying under the graph has a Binomial<span class="math-container">$(n, p)$</span> distribution. This has an expected value of <span class="math-container">$np$</span> and a variance of <span class="math-container">$np(1-p)$</span>. The variance of the estimate therefore is
<span class="math-container">$$\text{Var}(\hat \theta) = \text{Var}\left(\frac{AX}{n}\right) = \left(\frac{A}{n}\right)^2\text{Var}(X) = \left(\frac{A}{n}\right)^2 n \left(\frac{\theta}{A}\right)\left(1 - \frac{\theta}{A}\right) = \frac{\theta(A-\theta)}{n}.$$</span>
Because we do not know <span class="math-container">$\theta$</span>, we first use a small <span class="math-container">$n$</span> to obtain an initial estimate and plug that into this variance formula. (A good educated guess about <span class="math-container">$\theta$</span> will serve well to start, too. For instance, the graph (see below) suggests <span class="math-container">$\theta$</span> is not far from <span class="math-container">$1/2$</span>, so you could start by substituting that for <span class="math-container">$\hat\theta$</span>.) This is the <em>estimated variance</em>,
<span class="math-container">$$\widehat{\text{Var}}(\hat\theta) = \frac{\hat\theta(A-\hat\theta)}{n}.$$</span>
Using this initial estimate <span class="math-container">$\hat\theta$</span>, find an <span class="math-container">$n$</span> for which <span class="math-container">$\widehat{\text{Var}}(\hat\theta) \le 0.0001 = T$</span>. The smallest possible such <span class="math-container">$n$</span> is easily found, with a little algebraic manipulation of the preceding formula, to be
<span class="math-container">$$\hat n = \bigg\lceil\frac{\hat\theta(A - \hat\theta)}{T}\bigg\rceil.$$</span>
Iterating this procedure eventually produces a sample size that will at least approximately meet the variance target. As a practical matter, at each step <span class="math-container">$\hat n$</span> should be made sufficiently greater than the previous estimate of <span class="math-container">$n$</span> so that eventually a large enough <span class="math-container">$n$</span> is guaranteed to be found for which <span class="math-container">$\widehat{\text{Var}}(\hat\theta)$</span> is sufficiently small. For instance, if <span class="math-container">$\hat n$</span> is less than twice the preceding estimate, use twice the preceding estimate instead.
<hr />
<strong>In the example in the question,</strong> because <span class="math-container">$f$</span> ranges from <span class="math-container">$1$</span> down to <span class="math-container">$0$</span> as <span class="math-container">$x$</span> goes from <span class="math-container">$0$</span> to <span class="math-container">$1$</span>, we may surround its graph by a box of height <span class="math-container">$1$</span> and width <span class="math-container">$1$</span>, whence <span class="math-container">$A=1$</span>.
One calculation beginning at <span class="math-container">$n=10$</span> first estimated the variance as <span class="math-container">$2/125$</span>, resulting in a guess <span class="math-container">$\hat n = 1600$</span>. Using <span class="math-container">$1600$</span> new points (I didn't even bother to recycle the original <span class="math-container">$10$</span> points) resulted in an updated estimated variance of <span class="math-container">$0.0001545$</span>, which was still too large. It suggested using <span class="math-container">$\hat n = 2473$</span> points. The calculation terminated there with <span class="math-container">$\hat\theta = 0.4262$</span> and <span class="math-container">$\widehat{\text{Var}}(\hat\theta) = 0.00009889$</span>, just less than the target of <span class="math-container">$0.0001$</span>. The figure shows the random points used at each of these three stages, from left to right, superimposed on plots of the box and the graph of <span class="math-container">$f$</span>.
<img src="https://i.stack.imgur.com/ieA5s.png" alt="Figure" />
Since the true value is <span class="math-container">$\theta = 0.430764\ldots$</span>, the true variance with <span class="math-container">$n=2473$</span> is <span class="math-container">$\theta(1-\theta)/n = 0.00009915\ldots$</span>. (Another way to express this is to observe that <span class="math-container">$n=2453$</span> is the smallest number for which the true variance is less than <span class="math-container">$0.0001$</span>, so that using the estimated variance in place of the true variance has cost us an extra <span class="math-container">$20$</span> sample points.)
In general, when the area under the graph <span class="math-container">$\theta$</span> is a sizable fraction of the box area <span class="math-container">$A$</span>, the estimated variance will not change much when <span class="math-container">$\theta$</span> changes, so it's usually the case that the estimated variance is accurate. When <span class="math-container">$\theta/A$</span> is small, a better (more efficient) form of Monte-Carlo estimation is advisable.
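A minimal Python sketch of this iterative scheme (my own illustration, not from the original analysis; it assumes the box area A = 1, the educated starting guess of 1/2, and the "at least double n" rule described above):
<pre><code>import numpy as np

rng = np.random.default_rng(0)
A, target = 1.0, 1e-4              # box area and variance target

def hit_or_miss(n):
    # n uniform points in the unit box; theta is estimated by the hit fraction
    x, y = rng.uniform(size=n), rng.uniform(size=n)
    return A * np.mean(y <= np.exp(-x ** 2) * (1.0 - x))

theta_hat, n = 0.5, 10             # educated starting guess for theta, tiny first n
while theta_hat * (A - theta_hat) / n > target:
    # smallest n meeting the target for the current estimate, at least doubling n
    n = max(2 * n, int(np.ceil(theta_hat * (A - theta_hat) / target)))
    theta_hat = hit_or_miss(n)

print(n, theta_hat, theta_hat * (A - theta_hat) / n)
</code></pre>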
|
<blockquote>
Implement an estimator using Monte Carlo integration of
$$\theta=\int\limits_0^1e^{-x^2}(1-x)dx$$
</blockquote>
While you can use a $\mathcal{U}([0,1])$ distribution for your Monte Carlo experiment, the fact that both $$x \longrightarrow \exp\{-x^2\}\quad \text{and}\quad x \longrightarrow (1-x)$$ are decreasing functions suggest that a decreasing density would work better. For instance, a <em>truncated</em> Normal $\mathcal{N}^1_0(0,.5)$ distribution could be used:
\begin{align*}\theta&=\int\limits_0^1e^{-x^2}(1-x)\,\text{d}x\\&=[\Phi(\sqrt{2})-\Phi(0)]\sqrt{2\pi\frac{1}{2}}\int\limits_0^1\frac{1}{\Phi(\sqrt{2})-\Phi(0)}\dfrac{e^{-x^2/2\frac{1}{2}}}{\sqrt{2\pi\frac{1}{2}}}(1-x)\,\text{d}x\\&=[\Phi(\sqrt{2})-\Phi(0)]\sqrt{\pi}\int\limits_0^1\frac{1}{\Phi(\sqrt{2})-\Phi(0)}\dfrac{e^{-x^2}}{\sqrt{\pi}}(1-x)\,\text{d}x\end{align*}
which leads to the implementation
<pre><code>n=1e8
U=runif(n)
#inverse cdf simulation
X=qnorm(U*pnorm(sqrt(2))+(1-U)*pnorm(0))/sqrt(2)
X=(pnorm(sqrt(2))-pnorm(0))*sqrt(pi)*(1-X)
mean(X)
sqrt(var(X)/n)
</code></pre>
with the result
<pre><code>> mean(X)
[1] 0.4307648
> sqrt(var(X)/n)
[1] 2.039857e-05
</code></pre>
fairly close to the true value
<pre><code>> integrate(function(x) exp(-x^2)*(1-x),0,1)
0.4307639 with absolute error < 4.8e-15
</code></pre>
Another representation of the same integral is to use instead the distribution with density $$f(x)=2(1-x)\mathbb{I}_{[0,1]}(x)$$ and cdf $F(x)=1-(1-x)^2$ over $[0,1]$. The associated estimation is derived as follows:
<pre><code>> x=exp(-(1-sqrt(runif(n)))^2)/2
> mean(x)
[1] 0.4307693
> sqrt(var(x)/n)
[1] 7.369741e-06
</code></pre>
which does better than the truncated normal simulation.
|
https://stats.stackexchange.com
|
47,476
|
[
"https://softwareengineering.stackexchange.com/questions/47476",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14/"
] |
In academia, it's considered cheating if a student copies code/work from someone/somewhere else without giving credit, and tries to pass it off as his/her own.
Should companies make it a requirement for developers to properly credit all <em>non-trivial</em> code and work that they did not produce themselves? Is it useful to do so, or is it simply overkill?
I understand there are various free licenses out there, but if I find stuff I like and actually use, I really feel compelled to give credit via comment in code even if it's not required by the license (or lack thereof one).
|
I'd say this is probably essential. For one thing, the company may need to deal with any license terms and other legal implications - just because it's "free" doesn't mean you can do what you like with it.
However, there may be an exception with example code copied and adapted from reference books. After all, that's basically what that code is there for. Even so, a comment is a good idea - someone may need to go back to the source for bugfixes (e.g. in errata), or for a better understanding of why you used it.
|
I always do. I also link back to the original source. I do this more for reference than to give credit (so I can go back and see the original author's notes and/or updates).
I think it's good practice, but totally unenforceable; having a policy in place is almost worthless, as I don't think it will change anyone's behavior.
|
https://softwareengineering.stackexchange.com
|
20,922
|
[
"https://electronics.stackexchange.com/questions/20922",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/5651/"
] |
I'm currently working on a project involving (I think 3 mm, 1.5 VDC) infrared LEDs. However, because of my photoresistor, I think the current or voltage reaching them will vary greatly, down to minuscule amounts. So, do these LEDs need UVLOs (under-voltage lockouts)? They are very, very sensitive and I've already wasted half the pack.
|
No.
LEDs are not damaged by low voltages.
If you are damaging LEDs, you must be driving them beyond their rated currents. Show your circuit to receive advice.
In general, very few electronic components are damaged by undervoltage. Some microprocessors can mis-execute in a brownout condition, which could have undesirable effects depending on the application. And Li-Ion batteries should not be discharged to too low a voltage.
|
LEDs cannot be damaged by "forward" voltages that are so low that they do not draw rated current.
They <strong>can</strong> be damaged by voltages that are low by normal standards.<br>
An infrared LED may easily be destroyed by a 3V3 or 5V power supply if current in excess of its maximum rated current flows.
LEDs are intended to be driven either by a constant-current source or by a voltage source plus a series resistor chosen so that, in both cases, the maximum current is less than the rated current.
In your circuit, the worst-case current must <strong>NEVER</strong> be able to exceed the maximum rated value.
LEDs may be damaged by reverse polarity voltage. Current drawn will be small, even when there is enough voltage to kill the LED.
Many LEDs are prone to electrostatic damage from "static electricity". Handling LEDs without wearing an earth strap or taking equivalent precautions may be enough.
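As a rough numerical illustration of the voltage-source-plus-resistor approach (my own sketch; the 3 V supply, 1.2 V forward drop and 20 mA target current are assumed example values, not from the question - substitute the numbers from your own datasheet):
<pre><code># Series-resistor sizing for an LED run from a voltage source (assumed values)
V_supply = 3.0     # supply voltage, volts
V_forward = 1.2    # typical IR-LED forward voltage, volts (check the datasheet)
I_target = 0.020   # desired LED current, amps -- keep below the rated maximum

R = (V_supply - V_forward) / I_target          # ohms
P_r = (V_supply - V_forward) * I_target        # watts dissipated in the resistor
print(f"series resistor ~ {R:.0f} ohm, dissipating ~ {P_r * 1000:.0f} mW")
</code></pre>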
|
https://electronics.stackexchange.com
|
336,231
|
[
"https://physics.stackexchange.com/questions/336231",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/120802/"
] |
If you (hypothetically) had an infinitely cold ice cube (an ice cube that stays at absolute zero no matter how much heat it absorbs), how long would it take for the Universe to cool down to absolute zero?
|
There is no such thing as an infinitely cold ice cube.
The closest scenario I can think of is a system with a heat sink; a system coupled to a very large heat reservoir. You can then solve a heat equation.
You should also take into account that only as
$$t\rightarrow \infty $$
does the temperature of the system approach that of the reservoir, so there is no definite period of time that answers the question.
Instead you can ask what the characteristic time of cooling is (i.e. when the temperature falls to 1/e of its initial value) or you can ask when it will reach some threshold temperature (e.g. 0.001 of the initial temperature).
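As a small illustration of the "characteristic time" idea (my own sketch, with an arbitrary assumed time constant, not from the question), one can solve the exponential relaxation toward the reservoir for a chosen threshold:
<pre><code>import math

T0 = 300.0               # assumed initial temperature of the surroundings, K
tau = 50.0               # assumed characteristic cooling time (arbitrary units)
threshold = 0.001 * T0   # "practically zero": 0.1% of the initial temperature

# exponential relaxation toward a reservoir at 0: T(t) = T0 * exp(-t / tau)
t_char = tau                               # time to fall to T0 / e
t_thresh = tau * math.log(T0 / threshold)  # time to fall below the threshold
print(t_char, t_thresh)                    # the threshold is reached after ~6.9 tau
</code></pre>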
|
Absolute zero means that the particles aren't moving at all. Any thermal energy will be absorbed by the cube. If we assume diffusion as the only method of heat transfer, there must be a direct chain of mass from the cube to all other objects in the universe for it to absorb energy from all of them. The second there is vacuum, diffusion can no longer occur. Hence, assuming an ideal heat sink, it would just reduce everything on Earth to absolute zero.
It's also true that diffusion isn't the only method of heat transfer. Electromagnetic waves and radiation can travel through vacuum and can transfer heat, but this would have little to no effect. These forms of energy are reaching us already, and the amount of energy from other objects in the universe that reaches Earth is minuscule. The ice cube would not increase what arrives.
Hence, it would only have an effect on Earth.
|
https://physics.stackexchange.com
|
413,788
|
[
"https://electronics.stackexchange.com/questions/413788",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/194133/"
] |
I am having difficulty producing <span class="math-container">\$S, P, Q \$</span> and <span class="math-container">\$D\$</span> from the instantaneous power <span class="math-container">\$p(t)\$</span>.
Let's say that both voltage and current are clear sine waves. Then:
<span class="math-container">\$p(t) = Vsin(ωt+φ_V) \cdot Isin(ωt+φ_I)\$</span>
and by using the identity
<span class="math-container">\$sinA sinB = \frac{1}{2} (cos(A-B) - cos(A+B)) \$</span>
instantaneous power can be written as
<span class="math-container">\$p(t) = \frac{VI}{2}cos(φ_V-φ_I) - \frac{VI}{2}cos(2ωt+φ_V+φ_I)\$</span>
Now, it can be proved that for a pure sine wave <span class="math-container">\$A_{RMS} = \frac{A_{peak}}{\sqrt2} \$</span>, hence:
<span class="math-container">\$P = V_{RMS}I_{RMS} \cdot cos(φ_V-φ_I)\$</span>.
How to prove that <span class="math-container">\$Q = V_{RMS}I_{RMS}\cdot sin(φ_V-φ_I)\$</span> and <span class="math-container">\$S = V_{RMS}I_{RMS}\$</span>?
Moreover, how to prove that
<span class="math-container">\$P = \sum_{k=1}^\infty V_{RMS}I_{RMS} \cdot cos(φ_V-φ_I) \\ Q=\sum_{k=1}^\infty V_{RMS}I_{RMS} \cdot sin(φ_V-φ_I) \\ D = \sqrt{\sum_{k\ne j}^{\infty} U_k^2I_j^2 +U_j^2I_K^2 - 2 U_kI_kU_jI_jcos(φ_k-φ_j)}\$</span>
when voltage and current are not sines, but arbitrary, periodic waveforms?
I don't expect from anyone to provide me with the full proof, but to show me the right direction.
<hr>
EDIT:
In order to prove that <span class="math-container">\$Q = V_{RMS}I_{RMS}\cdot sin(φ_V-φ_I)\$</span> - for pure sine waves - one must decompose the current into two components: one in phase with the voltage and one <span class="math-container">\$\pm90^o \$</span> out of phase. By drawing the phasors <span class="math-container">\$\vec{V}, \vec{I_X}\$</span> and <span class="math-container">\$ \vec{I_Y}\$</span> this becomes obvious.
|
I think I found the answer I am looking for. I am presenting it here, so it will be available to anyone that is interested.
First consider:
<span class="math-container">\$v(t) = V_1cos(ω_1t+φ_{V_1}) \\ i(t)=I_1cos(ω_1t+φ_{I_1})\$</span>
then:
<span class="math-container">\$p(t) = v(t)\cdot i(t) = \\V_1I_1cos(ω_1t+φ_{V_1})cos(ω_1t+φ_{I_1}) = \\\frac{1}{2}V_1I_1(cos(φ_{V_1}-φ_{I_1})+cos(2ω_1t+φ_{V_1}+φ_{I_1})) \$</span>
Let's define <span class="math-container">\$θ = φ_{V_1}-φ_{I_1}\$</span>, hence <span class="math-container">\$φ_{V_1}+φ_{I_1} = 2φ_{V_1}-θ\$</span> and the above can be written as:
<span class="math-container">\$p(t) =\frac{1}{2}V_1I_1(cos(θ) + cos(2ω_1t+2φ_{V_1}-θ))\$</span>, but
<span class="math-container">\$cos(2ω_1t+2φ_{V_1}-θ) = cos(2ω_1t+2φ_{V_1})cos(θ)+sin(2ω_1t+2φ_{V_1})sin(θ)\$</span> and
<span class="math-container">\$\frac{1}{2}V_1I_1 = \frac{V_1}{\sqrt(2)}\frac{I_1}{\sqrt(2)}= \tilde{V_1}\tilde{I_1}\$</span>
Finally:
<span class="math-container">\$p(t) = \tilde{V_1}\tilde{I_1}cos(θ)(1+cos(2ω_1t+2φ_{V_1})) + \tilde{V_1}\tilde{I_1}sin(θ)sin(2ω_1t+2φ_{V_1}) \Rightarrow \\ p(t) =P(1+cos(2ω_1t+2φ_{V_1}))+Qsin(2ω_1t+2φ_{V_1})\$</span>
It is clear that one component of the instantaneous power is always positive (P) and the other one oscillates back and forth, with mean value equal to zero (Q). Both components have double the frequency of the initial signals and an initial phase.
Now the fun begins...
<hr>
Consider
<span class="math-container">\$v(t) = \sum_{n=1}^\infty V_ncos(nω_1t+φ_{V_n}) \\ i(t)=\sum_{n=1}^\infty I_ncos(nω_1t+φ_{I_n})\$</span>
<span class="math-container">\$p(t) = \\ V_1I_1cos(ω_1t+φ_{V_1})cos(ω_1t+φ_{I_1}) +V_2I_2cos(2ω_1t+φ_{V_2})cos(2ω_1t+φ_{I_2})+... \\
V_1I_2cos(ω_1t+φ_{V_1})cos(2ω_1t+φ_{I_2}) +
V_1I_3cos(ω_1t+φ_{V_1})cos(3ω_1t+φ_{I_3}) + ... \\
V_2I_1cos(2ω_1t+φ_{V_2})cos(ω_1t+φ_{I_1}) + ...\$</span>
Hence, it is possible to write instantaneous power as:
<span class="math-container">\$p(t) = \sum_{n=1}^\infty V_nI_ncos(nω_1t+φ_{V_n})cos(nω_1t+φ_{I_n})+\sum_{j\ne k} [V_jI_kcos(jω_1t+φ_{V_j})cos(kω_1t+φ_{I_k})+V_kI_jcos(kω_1t+φ_{V_k})cos(jω_1t+φ_{I_j})]\$</span>
As proved above, the first sum provides P and Q as:
<span class="math-container">\$P = \sum_{k=1}^\infty \tilde{V_k}\tilde{I_k}cos(θ_k)\\Q = \sum_{k=1}^\infty \tilde{V_k}\tilde{I_k}sin(θ_k)\$</span>
where <span class="math-container">\$θ_k = φ_{V_k}-φ_{I_k}\$</span>.
The second sum is what is called Distortion Power and it can be calculated as:
<span class="math-container">\$D=\sqrt{S^2-P^2-Q^2}\$</span>, where:
<span class="math-container">\$S^2 = \sum_{k=1}^\infty[{V_k^2}{I_k^2}]+\sum_{k\ne j}[V_k^2I_j^2+V_j^2I_k^2] \\
P^2 = \sum_{k=1}^\infty[V_k^2I_k^2cos^2(φ_k)]+\\
V_1I_1V_2I_2cos(φ_1)cos(φ_2)+V_1I_1V_3I_3cos(φ_1)cos(φ_3)+... \\
Q^2 = \sum_{k=1}^\infty[V_k^2I_k^2sin^2(φ_k)]+\\
V_1I_1V_2I_2sin(φ_1)sin(φ_2)+V_1I_1V_3I_3sin(φ_1)sin(φ_3)+...\$</span>
Hence
<span class="math-container">\$P^2+Q^2 = \sum_{k=1}^\infty[V_k^2I_k^2(cos^2(φ_k)+sin^2(φ_k))]-\\
V_1I_1V_2I_2(cos(φ_1)cos(φ_2)+sin(φ_1)sin(φ_2))-... = \\
\sum_{k=1}^\infty[V_k^2I_k^2]-\sum_{k\ne j}2V_kI_kV_jI_jcos(φ_k-φ_j)\$</span>
Last step is:
<span class="math-container">\$D=\sqrt{\sum_{k\ne j}V_k^2I_j^2+V_j^2I_k^2-2V_kI_kV_jI_jcos(φ_k-φ_j)}\$</span>
I skipped some calculations, but they are easy to make.
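As a numerical sanity check of these formulas (my own sketch, not part of the derivation; the 50 Hz fundamental, amplitudes and phases are assumed example values), one can build a two-harmonic voltage and current, compute P and S directly from the time-domain definitions, and compare D = sqrt(S^2 - P^2 - Q^2) with the closed-form pair expression:
<pre><code>import numpy as np

w1 = 2 * np.pi * 50.0                                        # assumed 50 Hz fundamental
t = np.linspace(0.0, 1.0 / 50.0, 200_000, endpoint=False)    # one full period

# assumed peak amplitudes and phases: harmonic -> (peak, phase)
V = {1: (10.0, 0.3), 3: (2.0, -0.4)}
I = {1: (4.0, -0.5), 3: (1.0, 0.8)}

v = sum(a * np.cos(n * w1 * t + ph) for n, (a, ph) in V.items())
i = sum(a * np.cos(n * w1 * t + ph) for n, (a, ph) in I.items())

# time-domain definitions of P and S
P_time = np.mean(v * i)
S_time = np.sqrt(np.mean(v ** 2) * np.mean(i ** 2))

# per-harmonic RMS formulas derived above
rms = lambda peak: peak / np.sqrt(2.0)
P = sum(rms(V[n][0]) * rms(I[n][0]) * np.cos(V[n][1] - I[n][1]) for n in V)
Q = sum(rms(V[n][0]) * rms(I[n][0]) * np.sin(V[n][1] - I[n][1]) for n in V)
D = np.sqrt(S_time ** 2 - P ** 2 - Q ** 2)

# closed-form D for the single cross pair (k, j) = (1, 3)
Vk, Ik, Vj, Ij = rms(V[1][0]), rms(I[1][0]), rms(V[3][0]), rms(I[3][0])
thk, thj = V[1][1] - I[1][1], V[3][1] - I[3][1]
D_pair = np.sqrt(Vk**2 * Ij**2 + Vj**2 * Ik**2
                 - 2 * Vk * Ik * Vj * Ij * np.cos(thk - thj))

print(P_time, P)      # agree: time-average of v*i equals the harmonic sum
print(D, D_pair)      # agree: sqrt(S^2 - P^2 - Q^2) equals the pair formula
</code></pre>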
|
Starting with the <strong>definition</strong> for active power
<span class="math-container">\$P = \frac{1}{T} \int_0^T v(t) \cdot i(t) \cdot dt = \frac{1}{T} \int_0^T p(t) \cdot dt\$</span>
one gets the following expression for the power for linear networks (containing R, L, C, all constant):
<span class="math-container">\$P = \sum_{k=1}^\infty V_{k,RMS} \cdot I_{k,RMS} \cdot cos(φ_{V,k}-φ_{I,k})\$</span>
For a single (pure) sinewave there is
<span class="math-container">\$P = V_{RMS} \cdot I_{RMS} \cdot cos(φ_{V}-φ_{I})\$</span>
Above expressions have the meaning of power consumed (e.g. in form of heat).
Now there is the <strong>definition</strong> of reactive power as
<span class="math-container">\$Q = \sum_{k=1}^\infty V_{k,RMS} \cdot I_{k,RMS} \cdot sin(φ_{V,k}-φ_{I,k})\$</span>
The important point is that this is a <strong><em>definition</em></strong>, not something derived. In the case of linear networks it has physical meaning; in the case of nonlinear networks (containing switches, diodes, nonlinear inductors ...) it has not.
Same is true for the apparent power which is also a <strong>definition</strong> which cannot be derived, and which is defined as
<span class="math-container">\$S = V_{RMS} \cdot I_{RMS}\$</span>
It allows one to compare the size of rotating machines and/or transformers in a grid (of fixed frequency, e.g. 50 Hz or 60 Hz), which is very useful for the power engineer.
For linear networks one can easily show from the equations (definitions) above that
<span class="math-container">\$S^2 = P^2 + Q^2\$</span>
In case of nonlinear networks the above equation does not work and another power term is added as
<span class="math-container">\$S^2 = P^2 + Q^2 + D^2\$</span>
This gives the <strong>definition</strong> of D as
<span class="math-container">\$D = \sqrt {S^2 - P^2 - Q^2}\$</span>
I found this argumentation in my old university textbook (Franz Zach, 'Leistungselektronik: Bauelemente, Leistungskreise, Steuerungskreise, Beeinflussungen', ISBN-10: 3709144590, Springer, Oct. 1980).
Short answer:
<ul>
<li>Instantaneous power p(t) results in P which is easy to show for linear networks. </li>
<li>Q is defined, not derived from p(t), with the background of linear networks. </li>
<li>S is defined, not derived from p(t), with the background of electrical machines in the grid. </li>
<li>D is defined, not derived from p(t), in order to bring together P, Q, and S in nonlinear networks.</li>
</ul>
|
https://electronics.stackexchange.com
|
395,028
|
[
"https://physics.stackexchange.com/questions/395028",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/185504/"
] |
I've searched about this but I can't really find an answer that satisfies me.
What is it that determines "this wave has a frequency of x Hz and a wavelength of y"?
I can't seem to understand this... Does the wavelength have anything to do with length? Is there something moving up and down, with each cycle being a complete wave?
I'm very confused, especially because the wave analysis we see in textbooks is a mathematical analysis, and then I hear statements like "the frequency of a wave is related to its energy".
And I get even more confused when I hear that "the wavelength of a wave can stop it from passing through a hole with a diameter equal to x".
I can't connect the mathematical side to the physical side.
|
<blockquote>
Does the wavelength has anything to do with length is there something moving up and down and each cycle is a complete wave
</blockquote>
In classical EM, the answer is that the wavelength is the distance between successive peaks of the electric field.
When you think of a water wave, it's easy to see the peak because its sticking up out of the water.
But consider a sound wave in metal... there's no physical peak in the same terms. In this case, the wavelength is the distance between the peak tension in the metal as the sound squeezes and stretches it. There's physical motion at the microscopic level, but you don't see it.
Classical EM works in the same fashion, it's the internal "tension" of the electric (or magnetic) field that you're measuring. Of course that begs the question what this field is that you're working on, but that's another question (or non-question, I guess)...
When this field passes a conductive material, the basic pattern of the field can be seen in the motion of the electrons in the conductor. That's how antennas create a signal that electronics can amplify. You won't see that either, but this is more directly like the sound case.
<blockquote>
The frequency of a wave is related to it´s energy
</blockquote>
Now that's something different. This statement is not true in classical EM, this is something that arises in QM. If the book does not make that clear, the book is crap.
<blockquote>
The wavelength of a wave can stop a wave from passing trough a hole with a diameter equal to x
</blockquote>
You're lucky this is so, otherwise you'd fry yourself looking through the holes in the metal plate in the door of a microwave!
So the reason for this is very complex, but it all boils down to the hole having to be in a material that interacts with the EM. So it's a metal plate in the door, not a plastic one.
Think about the old-school TV antennas you see, sadly hanging on rusting towers. Ever notice they always consist of a series of horizontal metal poles, with a clearly defined distance between them?
What happens is that any time you have a current in a conductor it radiates EM. So when a radio (TV) signal goes past the antenna it picks up that pattern and then <em>rebroadcasts it</em>. The strength of that signal is maximized if the rods, or <em>elements</em>, are a resonant length - which is why you have different lengths in a TV antenna to pick up different channels.
If you <em>space</em> the metal rods properly, that secondary signal will arrive at a given location exactly one wavelength after the "original" signal, and the two will constructively interfere. Put a bunch in a row and you amplify the signal. If you look closely at one of these antennas, you'll note the wires going to the TV actually only connect to one of the sets of rods, the others are passively adding to the signal through this interference process.
I hope that makes sense...
The same thing is happening in the plate, sort of. In this case the responding signal interferes <em>destructively</em> with the original signal on the back, and constructively on the front. The result is basically a mirror. But this only works when the holes are smaller than the wavelength and the plate has the right thickness. Microwave-oven microwaves have a wavelength of around 12 cm (2.45 GHz), so a hole that's a couple of mm is effectively like no hole at all. Light from the bulb inside has a wavelength hundreds of thousands of times smaller, so it gets through no problem.
Handy, no?
Another example: have you noticed modern TV antennas don't look like the old-school ones? They have an X shaped <em>active element</em> at the front, and then a bunch of rods behind them. The rods are spaced close together, compared to the ~2m long signals they may as well be a solid sheet of metal. Yet the rods have a crapload less wind drag than a solid sheet.
Handy, no?
It's also much more complex than that, and you need to use all sorts of expansions and such to calculate it, but this is the basic idea.
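To put rough numbers on the hole-size argument, here is a back-of-envelope sketch of λ = c/f (my own illustration; the frequencies are typical assumed values):
<pre><code>c = 3.0e8    # speed of light, m/s
for name, f in [("microwave oven", 2.45e9),
                ("VHF TV signal", 150e6),
                ("visible light", 5.5e14)]:
    print(f"{name}: wavelength = {c / f:.3g} m")
# microwave oven ~0.12 m, VHF TV ~2 m, visible light ~5.5e-07 m --
# millimetre-sized holes block the first but are enormous compared to the last
</code></pre>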
|
My understanding is that what is moving up and down is the strength of the electric field at a particular point in space. EM radiation is generated by the mechanical vibration of charged particles. Remember a charged particle casts an electric field that decays as 1/r^2. When a charged particle moves, say in the +x direction, all of the points in space have to acquire new field values to reflect the new distance from the charged particle, but it takes time for the "information" that the particle has moved to reach these points in space. Along the x axis, as the charged particle vibrates back and forth, the electric field at points on the x axis is alternately increased and decreased as electromagnetic pulses propagating at the speed of light pass through points on the axis. At any particular point, the field will be alternately high and low. This is the EM wave and the frequency is based on the vibrational frequency of the charged particle. It is -somewhat- similar to splashing your hands in a pool and the frequency of the water waves correspond to the frequency of your splashing. For me it seems intuitive that the faster you splash or vibrate the particle, the more energy is being used to reorganize the electric field.
Einstein also predicted a similar effect with gravity waves which have now been measured with the motion of black holes.
|
https://physics.stackexchange.com
|
8,423
|
[
"https://cardano.stackexchange.com/questions/8423",
"https://cardano.stackexchange.com",
"https://cardano.stackexchange.com/users/5844/"
] |
I have a Daedalus wallet running which is a full node so in theory, I should get all the CLI functionality from that one install, how do I access it?
|
Daedalus has its own <code>cardano-node</code> instance, so you can specify the node's socket variable and use it for <code>cardano-cli</code> purposes.
First, launch Daedalus, and click on Help > Daedalus Diagnostics. Under the "Core Info" section, the "Daedalus State Directory" specifies the filepath that Daedalus uses on your computer. There should be a socket variable (likely named <code>cardano-node.socket</code>) in this directory which you can point to in your <code>bashrc</code> file.
In your CLI, run: <code>nano ~/.bashrc</code>
Now, scroll down and add the following line to the bashrc file:
<code>export CARDANO_NODE_SOCKET_PATH=<PATH_TO_SOCKET_IN_DAEDALUS_STATE_DIRECTORY></code>
Exit the bashrc file and run: <code>source ~/.bashrc</code>
Make sure cardano-cli is installed and is in your $PATH. You should now be able to run <code>cardano-cli</code> commands using Daedalus' <code>cardano-node</code> instance.
|
Go to the Daedalus menu, and you should see a menu item that says "Open a Marlowe terminal"; a terminal will open with the Marlowe CLI installed.
|
https://cardano.stackexchange.com
|
139,677
|
[
"https://mathoverflow.net/questions/139677",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8133/"
] |
DISCLAIMER: All pointclasses considered here are boldface.
Most of the time, when doing descriptive set theory, we want the projective sets to "behave well;" for example, maybe we don't want there to be nonmeasurable projective sets, or projective well orderings of $\mathbb{R}$, etc. Generally, this means making some (fairly conservative) large cardinal assumption, or equivalent.
At the far opposite end of things is the axiom that all sets are constructible, $V=L$. This axiom implies that there is a projective - in fact, $\Delta^1_2$ - well-ordering of the reals, and so projective sets become bad very early in the hierarchy.
My question is about the state of affairs when $V=L$ holds. My motivation is simply that I don't feel I have a good grasp on basic concepts in descriptive set theory, and the following seemed like a good test problem to assign myself; but I have thought about it for a while without making progress, so I'm asking here:
Let $\oplus$ be one of the usual pairing operators on $\omega^\omega$. For the purposes of this question, we say that a pointclass $\Gamma\subseteq \mathcal{P}(\omega^\omega)$ has the uniformization property if whenever $A\in \Gamma$, there is some $B\in \Gamma$ such that:
<ul>
<li>$B\subseteq A$, and</li>
<li>Whenever $x\oplus y\in A$, there is a unique $z$ such that $x\oplus z\in B$.</li>
</ul>
That is, we view $A$ as coding a relation on $\omega^\omega\times \omega^\omega$, and $B$ is the graph of a function contained in $A$. (This is not usually how uniformization is presented, but it's equivalent for all intents and purposes.) My question is then:
<blockquote>
Assume $V=L$. Let $D$ be the set of (boldface) $\Delta^1_2$ elements of $\omega^\omega$; does $D$ have the uniformization property?
</blockquote>
Now, it seems clear to me that $D$ should <strong>not</strong> have the uniformization property. [EDIT: As Joel's answer below shows, this is completely wrong.] The counterexample should be just the $\Delta^1_2$ well-ordering $\prec$ given by the assumption that $V=L$: uniformizing $\prec$ requires us to choose, for each real $r$, a real $s$ such that $r\prec s$; and although $\prec$ is $\Delta^1_2$, the usual way of doing this - choosing the immediate $\prec$-successor of $r$ - is no longer $\Delta^1_2$.
However, I don't know how to show that $\prec$ - or any other $\Delta^1_2$ set - cannot be uniformized in $\Delta^1_2$. I suspect I'm just missing something fairly simple.
<hr>
Note: it is known that the boldface pointclasses $\Pi^1_1$ and $\Sigma^1_2$ have the uniformization property, and assuming large cardinals, the uniformization property can be further propagated to every pointclass $\Pi^1_{2n+1}$, $\Sigma^1_{2n}$. On the other hand, the class $\Delta^1_1$ of Borel sets lacks the uniformization property, provably in $ZFC$.
|
Unless I am mistaken, it seems to me that $\Delta^1_2$ does have the uniformization property in $L$.
For any set $A$ in $\Delta^1_2$, let $B$ select the $L$-least witness on each slice. So $B$ unifomizes $A$, and the graph of $B$ appears to be $\Delta^1_2$, by the following reasoning:
<ul>
<li>$x\oplus z\in B$ if and only if it is in $A$, and for every well-founded countable model $M$ of $V=L$ containing $x$ and $z$, if $y$ is a real in $M$ preceding $z$, then $x\oplus y\notin A$.</li>
<li>$x\oplus z\in B$ if and only if it is in $A$, and there is a well-founded countable model $M$ of $V=L$ containing $x$ and $z$, if $y$ is a real in $M$ preceding $z$, then $x\oplus y\notin A$. </li>
</ul>
The point here is that the countable well-founded models are correct about the $L$-predecesors of the reals that they can see. So we can use any or all of them when verifying that $z$ is least such that some $\Delta^1_2$ property holds. Note that the "for every real $y$ in $M$" is merely a natural number quantifier, since $M$ is coded as a countable structure. So the first of these characterizations is $\Pi^1_2$ and the second is $\Sigma^1_2$, and so it is $\Delta^1_2$ overall.
|
Assuming $V=L$, we have $AC$ and $CH$, so every set of reals is at most $\aleph_1$-Suslin. So we can find scales for them and uniformize them. In particular every $\Delta^1_2$ set of reals can be uniformized. As Joel said in the comment above, this works for all $\Delta^1_n$ under $V=L$.
|
https://mathoverflow.net
|
45,449
|
[
"https://mathoverflow.net/questions/45449",
"https://mathoverflow.net",
"https://mathoverflow.net/users/756/"
] |
<strong>SymMonCat</strong> is the cartesian 2-category of symmetric monoidal categories, braided monoidal functors, and monoidal natural transformations. The terminal symmetric monoidal category <strong>1</strong> has one object $I$ and $I \otimes I = I$.
A category enriched over a monoidal category $V$ assigns to each pair of objects $X, Y$ an object hom$(X,Y)$ in $V$ and to each object $X$ a morphism $id_X:I \to \mbox{hom}(X,X)$ in $V$.
When $V = $ <strong>SymMonCat</strong>, the morphism $id_X:1 \to \mbox{hom}(X,X)$ is a braided monoidal functor; since monoidal functors preserve the monoidal unit and tensor product, it must map the unit $I$ in <strong>1</strong> to the unit $I$ in hom$(X,X)$.
Is there a different way of enriching over <strong>SymMonCat</strong> such that $id_X$ does not pick out the monoidal unit (other than considering it a subcategory of <strong>Cat</strong>)?
|
[Ignore this first part, I'm just leaving it for the context to the comments below.]
It is hard for me to understand why you would want to enrich in symmetric monoidal categories, have an identity, and also want this identity to <em>not</em> be the unit of the symmetric monoidal category.
That said, you can always do away with units altogether and consider "enriched categories without identities". Is this what you are after?
<hr>
After Mike's example I am now on board. What you probably want to do is enrich over the symmetric monoidal 2-category of symmetric monoidal categories where the monoidal structure is the "tensor product of symmetric monoidal categories". What is this you ask?
The functor category between two symmetric monoidal categories $Fun^\otimes(B,C)$ is naturally equipped with a symmetric monoidal structure (using pointwise multiplication). The tensor product of symmetric monoidal categories $(-) \otimes B$ is the (weak) left adjoint to the functor $Fun(B, -)$. Thus $A \otimes B$ is a symmetric monoidal category such that symmetric monoidal functors from it to $C$ are the same as "bilinear" functors $A \times B \to C$. Now the monoidal unit for this tensor product is the free symmetric monoidal category on one object $\mathbb{F}$ (which is the category of finite sets and permutations).
In this way, if you enrich in (SymCat, $\otimes$) you get a unit being a functor $ \mathbb{F} \to Hom(a,a)$, which is equivalent to just some element, not necessarily the unit object of $Hom(a,a)$.
The prototypical example is the 2-category of symmetric monoidal categories itself.
|
I suppose that the monoidal structure on $SymMonCat$ you mean is the cartesian one, and by braided functor you mean a symmetric pseudo-monoidal functor (a functor commuting with the canonical symmetry isomorphisms, whose coherence data are isomorphisms). Then $\mathcal{C}(X, X)\in SymMonCat$ has an internal symmetric monoidal structure and another monoidal structure given by the enriched composition $\mathcal{C}(X, X)\times\mathcal{C}(X, X)\to \mathcal{C}(X, X)$, with the relevant axioms of a $SymMonCat$-enriched category.
Then $\mathcal{C}(X, X)$ is a bimonoidal category, and the two monoidal identities for the two monoidal structures are $I_X$ (for the internal monoidal structure) and the (essential) image of the morphism you called $id_X$. A very reduced (decategorified) example of this is a rig: an algebraic structure with two monoidal laws, one abelian. In general the two units are different (trivial example: $(\mathbb{N}, +, 0 ; \times , 1)$).
|
https://mathoverflow.net
|
74,800
|
[
"https://cs.stackexchange.com/questions/74800",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/55524/"
] |
We define $\mathbf P$ as the set of problems solvable in polynomial time. We define $\mathbf{NP}$ as the set of problems with a verifier $ \in \mathbf P$.
Is there a name for problems whose verifiers are $\in \mathbf {NP}$ (e.g., $\mathbf{N(NP)}$)? I can't see this being a very <em>useful</em> complexity class, but, for example we have that $\mathbf{NP} \neq \mathbf{N(NP)} \implies \mathbf{P} \neq \mathbf{NP}$, so it might be an area of research for that reason alone.
|
Suppose you had a problem such that for any $x \in L$ there is a certificate $v$ that can be checked against $x$ by a nondeterministic polynomial-time algorithm. For a valid $(x, v)$ pair, that nondeterministic check has an accepting computation, i.e. there is some further certificate $v'$ such that $((x, v), v')$ can be verified in deterministic polynomial time.
But then you could simply combine $(v, v')$ into a single witness and run the check in deterministic polynomial time.
Thus, the class ${\bf N(NP)}$ you describe is really just equal to ${\bf NP}$.
|
If a problem has a polynomial-time verifier, a nondeterministic TM can guess the certificate and run the verifier, so the problem is automatically in NP.
|
https://cs.stackexchange.com
|
28,061
|
[
"https://quant.stackexchange.com/questions/28061",
"https://quant.stackexchange.com",
"https://quant.stackexchange.com/users/5641/"
] |
I'm self studying for an actuarial exam and I am curious about a property of the antithetic variate method for increasing the Monte Carlo price accuracy (i.e. For every random draw of $z$, also include a draw of $-z$ in the simulation).
<strong>Question:</strong>
Assume the Black-Scholes framework and consider a European call option with strike $K$ expiring in $T$ years on a non-dividend paying stock currently priced at $S_0$ with an annual volatility $\sigma$. Suppose that a Monte Carlo simulation is used to estimate the expected value at expiration of the option.
The simulation was performed using $n$ draws $u_1, u_2, ..., u_n$ from a uniform distribution to generate the stock price. Suppose that each of these draws generates a stock price at expiration which gives a zero payoff for the call option and therefore $E(\text{Payoff}) = \frac{1}{n} \sum_{i = 1}^n C(S_T^i, K, T) = 0$, where $S_T^i$ is the stock price at expiration for the $i$th draw.
Using the same uniform draws, and applying the antithetic variate method, will $E(\text{Payoff}) = \frac{1}{2n} \sum_{i = 1}^{2n} C(S_T^i, K, T) > 0$ necessarily?
My intuition says yes, but I don't have a way of convincing myself why.
|
No, you can have
$$
\frac{1}{2n}\sum_{i=1}^{2n} C(S^i_T,K,T) = 0
$$
First off, there's the obvious case where $n=1$ and $u_1 = 0.5$
More generally, for options way out of the money it is common to have
$$
\frac{1}{n}\sum_{i=1}^{n} C(S^i_T,K,T) = 0
$$
even for very large $n$. Antithetic sampling does not change that.
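To see this concretely, here is a small Python sketch (my own illustration; the Black-Scholes parameters and the deep out-of-the-money strike are assumed example values): both $u_i$ and its antithetic partner $1-u_i$ map to stock prices far below the strike, so every payoff is still zero.
<pre><code>import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
S0, K, T, r, sigma = 100.0, 300.0, 1.0, 0.02, 0.2   # assumed deep-OTM call
n = 10
u = rng.uniform(size=n)

def terminal_price(u):
    z = norm.ppf(u)   # map uniforms to standard normals
    return S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)

payoff      = np.maximum(terminal_price(u) - K, 0.0)
payoff_anti = np.maximum(terminal_price(1.0 - u) - K, 0.0)   # antithetic draws

estimate = np.exp(-r * T) * np.concatenate([payoff, payoff_anti]).mean()
print(estimate)   # 0.0 here: the antithetic paths are just as far out of the money
</code></pre>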
|
No. The antithetic variate method is mainly a way of getting a smaller standard error than the plain (non-antithetic) method, which is a direct result of the negative correlation between the original variable and the antithetic variable.
For an OTM option there will definitely be a lot of paths ending up with value 0. A better choice here may be importance sampling.
Write out the expectation under the risk-neutral measure and manually extract another normal density with a higher mean (in this case). Then use the Monte Carlo method to compute this new expectation. You can certainly apply the antithetic method to it as well to reduce the SE.
|
https://quant.stackexchange.com
|
1,118,226
|
[
"https://math.stackexchange.com/questions/1118226",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/203535/"
] |
I read that $y=ax^2+bx+c$ is a quadratic function where $a\neq0$, but is it true that $a$ really can't be zero? I think it is, because if $a$ <strong>were</strong> zero, there wouldn't be a parabola. There would just be a straight line, so it wouldn't be quadratic, because the $x^2$-term is what makes the parabola open upward or downward. Is this right?
|
If $a=0$, you no longer have a parabola.
Instead, you have a line: $y = bx+c$, with slope equal to $b$, and a $y$-intercept at $c$.
|
I guess I'm right. If $a$ is zero, then $y=ax^2+bx+c$ would change to $y=bx+c$, leaving $b$ as the slope and $c$ as the y-intercept, leaving a flat line, not a parabola.
|
https://math.stackexchange.com
|
316,555
|
[
"https://softwareengineering.stackexchange.com/questions/316555",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/113314/"
] |
In C++ I frequently see these two signatures used seemingly interchangeably:
<pre><code>void fill_array(Array<Type>* array_to_fill);
Array<Type>* filled_array();
</code></pre>
I imagine there is a subtle difference, but I don't know what it is.
Could someone explain when I might prefer one form over the other?
|
The first kind of signature is usually preferable.
The difference is that the second signature requires the array to be created inside the function. In particular, the second signature effectively requires the array to <strong>outlive the scope in which it was created</strong>. So what we're really comparing are these two snippets:
<pre><code>function foo1() {
Array<Type>* array = /* allocate memory and call constructors */
fill_array(array);
do_stuff_with_array(array);
/* free memory and call destructors */
}
function foo2() {
Array<Type>* array = filled_array(); /* allocation/constructors happen elsewhere */
do_stuff_with_array(array);
/* free memory and call destructors */
}
</code></pre>
The second version is potentially problematic for a few reasons:
<ol>
<li><strong>It's error prone.</strong> Functions which return pointers or references to something they created are very easy to get wrong, either in the form of undefined behavior or in the form of a completely unnecessary performance loss. Since you're working with raw pointers, it's easy to invoke undefined behavior by returning a pointer to a local variable that's no longer valid after the function has returned. If the array was being passed around as a regular object or a reference instead, you might suffer an expensive copy when <code>filled_array()</code> returns (the details of when this may or may not happen are complicated, see StackOverflow for all the gory details).</li>
<li><strong>You don't know how <code>filled_array()</code> allocated the memory for the array</strong>, so in principle you don't know how to deallocate it correctly. You may be able to get away with assuming it was allocated "normally", but if you don't control the allocation yourself, you just don't know for sure. It's possible some custom allocator was used, and it's also possible the pointer was saved somewhere so that a totally different function can do the deallocation at a specific time later in the pipeline (I believe this is common in C libraries). While a function that accepts a pointer as an argument could theoretically do this, it's far less likely.</li>
<li><strong>Memory/object reuse.</strong> What if I already have memory allocated for an Array, or an actual Array, when it comes time to call <code>filled_array()</code>? Unfortunately, <code>filled_array()</code> controls both the memory allocation and the value generation logic, so it's going to allocate more memory whether or not you need it. If you have many functions like this in a row, you're potentially wasting a huge amount of time and memory on allocations that could be completely skipped if you instead accepted pointers or references to memory controlled by client code. Or more concisely: Avoid writing functions that decide how memory should be managed <em>and</em> do something else with that same memory. Single responsibility principle and all that.</li>
</ol>
Of course, you <em>should</em> be passing the Array around by reference rather than pointer. And you should be using RAII objects (whether that means "just an Array" or a smart pointer to an Array) as much as possible so that all the allocation and deallocation is managed for you. But these arguments for creating the object at the correct scope still apply, since switching to references and RAII objects alone may only change correctness bugs into performance "bugs" (some of which move semantics can't automagically fix).
|
It seems most likely that the second one returns <code>Array<Type></code> and not <code>Array<Type>*</code>. In the first case, there is an <code>Array<Type></code> somewhere and you pass a pointer to it, so the function can fill it. In the second case, the function creates an object and returns it (unless the type is <code>Array<Type>*</code> and I don't know what's going on).
If move constructors (rvalue references) are available, the second is just as efficient but much more readable. Without rvalue references (older C++ versions) the first version was much more efficient, because the second needs to create a completely pointless copy of the value returned.
Clarification: I'm assuming that the second function returns an Array, not a pointer to an Array. Returning a pointer to an array would mean there must be an Array somewhere. So where is it? It doesn't make sense. But if the function returns an Array, that Array would most likely be created as a local variable inside the function. Then a copy constructor is called to <strong>copy</strong> it to an object inside the calling function, and the original is destroyed. And that is the copy that you can avoid in newer C++ versions.
|
https://softwareengineering.stackexchange.com
|
156,874
|
[
"https://dba.stackexchange.com/questions/156874",
"https://dba.stackexchange.com",
"https://dba.stackexchange.com/users/111846/"
] |
My question is regarding database design/architecture, but I'll use a familiar example to explain it.
Suppose there is a database for banking. This database has a table called <code>Customers</code> which stores <code>ID</code>, <code>Name</code>, <code>Address</code>, etc.. Now each of these customers can have their own sub-table for <code>Transactions</code>, with columns like <code>Transaction ID</code>, <code>Date</code>, <code>Time</code>, <code>Amount</code>, etc.
My question is, what is the efficient way to store this <code>Transactions</code> table. Should I create a new table in the database, add a field <code>User ID</code>, and insert transactions of ALL customers in this table. Or should I add a text field <code>Transactions</code> to the table Customers and store all transactions of each user in JSON format? I'd like to know how this is done in professional industries.
|
The following should help with the execution time:
<ul>
<li>remove the <code>ORDER BY</code> if it isn't strictly necessary</li>
<li>replace the join of <code>dr</code> table with <code>WHERE EXISTS (SELECT 1 FROM tms_door_record_raw As dr WHERE c.card_no = dr.card_no AND dr.record_time BETWEEN '2016-11-01' AND '2016-11-02')</code></li>
<li><code>GROUP BY</code>, may now not be needed</li>
<li>widen the index on <code>tms_door_record_raw</code> to include both <code>card_no</code> and <code>record_time</code></li>
</ul>
Test this and see if progress is being made. Further steps may be necessary but hopefully this is in the right direction.
|
Remove the GROUP BY.<br>
If you have (logically valid) duplicates, remove them at an early stage.
|
https://dba.stackexchange.com
|
169,485
|
[
"https://electronics.stackexchange.com/questions/169485",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/32541/"
] |
I'm reading the first chapter of the AoE. I've come across this section on differentiator/integrator circuits and couldn't understand the math behind it.
<img src="https://i.stack.imgur.com/pNe7M.png" alt="differntiator">
<img src="https://i.stack.imgur.com/2cM4z.png" alt="integrator">
For the first picture, it says small RC means dV/dt being much lower than dVin/dt, but I don't understand how it does this. Similarly, I'm not sure how large RC means Vin >> V. I know this may be a petty question, so please be patient with me.
|
What they mean is that a passive R/C filter can only approximate a differentiator/integrator as long as the time constant is far from the signal's time scale (much shorter than the signal for the differentiator, much longer for the integrator). The reason for this is that the true behavior of an R/C or R/L circuit is exponential in time; e.g., from basic circuit theory, the step response of an RC circuit is $$V=V_0 (1-e^{-t/RC})$$ If RC is large, this exponential is close to linear for t much smaller than RC, yielding behavior close to an ideal integrator (and an analogous argument applies to the differentiator).
Another way to think about it is to consider the integrator case in 1.15 with a constant voltage input (e.g. Vin = 10 V). We expect the output to be a linear ramp of constant slope (the integral of a constant is a straight line). However, if RC is too small, what happens is that after some time of integration, V increases appreciably as the capacitor C charges up. This decreases the current through R, which in turn decreases the slope of the output voltage. At some point, when V = Vin, the integrator stops working completely. This is how the behavior of the passive integrator deviates from the ideal integrator. Conversely, if RC is large enough, the voltage across capacitor C never gets large enough to reduce the current through resistor R appreciably, so the current through R is approximately constant and the circuit behaves as an integrator.
Note that you can use op-amps and other active components to force the current through the resistor (in 1.15) to be constant, this is how the "ideal integrator" circuit below works:
<img src="https://i.stack.imgur.com/q8iSJ.jpg" alt="enter image description here">
|
As an alternative, you can watch the phase response of the circuits.
Remember that an ideal differentiating (integrating) process requires a phase shift between input and output of +90deg (-90deg). A passive C-R (resp. R-C) combination only approaches these values in the limit: at very low frequencies for the C-R differentiator, and at very high frequencies for the R-C integrator. Hence, only an approximation of the required operation is possible - well below 1/RC for the differentiator and well above 1/RC for the integrator.
As an active example, the shown active (inverting) integrator enables the required phase shift (with minor errors) within a relatively broad frequency band - however with a value of +90deg (due to the inverting property).
|
https://electronics.stackexchange.com
|
178,582
|
[
"https://physics.stackexchange.com/questions/178582",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/27123/"
] |
I've previously learned that massive particles cannot achieve the speed of light.
But recently I read that, in connection with gels that refract and bounce light around inside them enough that it travels at everyday speeds, and by extension with how electromagnetism propagates through matter, pure photons are thought of as interacting with the massive objects, effectively gaining mass and moving at a speed less than the speed of light as a result.
I'm not sure if this is just how the paradigm is set up to frame the phenomena, or if this paradigm carries over seamlessly into other accepted theories elsewhere in science, but it made me wonder whether the exclusivity of mass and velocity applies not only to <em>particles with speeds at c being unable to have mass</em>, but also to the converse, <em>particles without mass being unable to have speeds less than c</em>.
My question about the properties of all particles, with syntactical brevity:
<blockquote>
Mass exclusive-or speed of light is true?
</blockquote>
|
First, we will look at the energy of a free relativistic particle of (rest) mass $m$ moving with velocity $v$:
$$E = \frac{mc^2}{\sqrt{1-\frac{v^2}{c^2}}}$$
where $E=mc^2$ when $v=0$. We now consider a few cases:
<ol>
<li>$m\ne0$:
In this case, $E\rightarrow\infty$ as $v\rightarrow c$. Therefore, a massive particle that at any point of time is moving at less than the speed of light cannot practically be accelerated to achieve the speed of light. A massive particle that has always existed with $v$ = $c$ however, has infinite energy - this leads to trouble, like infinite transfers of energy to ordinary matter being possible in interactions all over its path - and is therefore considered unphysical.</li>
<li>$m\rightarrow0$:
Here, we demand that the particle has a finite energy (for various reasons such as its being able to produce some measurable effects) in spite of its vanishing mass, and this can only be possible if $v=c$, so that $mc^2 = E\sqrt{1-\frac{v^2}{c^2}} = 0$. In these cases, the energy is often determined by other things. For example, a photon has an energy determined by its frequency $\nu$: $$E = h\nu$$</li>
</ol>
So, at least from an energy point of view, massless particles can only ever travel at the speed of light, and massive particles only at lesser speeds. This conclusion, of course, assumes that the above expression for energy also holds for massless particles. While a potentially suspect assumption, it is ok if we expect a smooth transition between the massless case and the limiting case of very low mass.
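To make the first case concrete, here is a tiny numerical sketch (not part of the argument above; the 1 kg mass is arbitrary) of how the energy of a massive particle blows up as $v \rightarrow c$:
<pre><code># Sketch: E = m c^2 / sqrt(1 - v^2/c^2) diverges as v -> c for a massive particle.
import math

c = 299_792_458.0   # speed of light [m/s]
m = 1.0             # rest mass [kg], arbitrary

for frac in (0.5, 0.9, 0.99, 0.999, 0.999999):
    v = frac * c
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    print(f"v = {frac:8.6f} c   E = {gamma * m * c**2:.3e} J")
</code></pre>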
|
Every particle needs to have energy to be a particle (if it had none it wouldn't even exist). Since energy is equivalent to mass and therefore gravitates I would say YES, all particles that have a speed less than the speed of light must also have mass.
Because the speed of the particle is less than the speed of light an observer could travel with the same velocity as your particle and experience the particle's rest mass (in contrast to photons, which have energy and gravitate, but have no rest mass).
|
https://physics.stackexchange.com
|
335,218
|
[
"https://softwareengineering.stackexchange.com/questions/335218",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/157012/"
] |
From Wikipedia on Single responsibility principle SoC
<blockquote>
... class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility
class or module should have one, and only one, responsibility . As an example, consider a module that compiles and prints a
report. Imagine such a module can be changed for two reasons. First,
the content of the report could change. Second, the format of the
report could change. These two things change for very different
causes; one substantive, and one cosmetic. The single responsibility
principle says that these two aspects of the problem are really two
separate responsibilities, and should therefore be in separate classes
or modules. It would be a bad design to couple two things that change
for different reasons at different times.
The reason it is important to keep a class focused on a single concern
is that it makes the class more robust. Continuing with the foregoing
example, if there is a change to the report compilation process, there
is greater danger that the printing code will break if it is part of
the same class.
</blockquote>
Is there a rule or general guideline on how to define the responsibility of methods in a class?
When we talk about inanimate objects it's easy to see that compiling a report and printing a report are two different responsibilities. However, when it comes to animate objects like a dog, it becomes much less evident which methods are different in nature.
Say you have:
<pre><code>class dog {
public $breed;
public $age;
public $color;
function bark($loudness) {
    // ...
}
function pee($amount) {
    // ...
}
function run($speed, $distance, $direction) {
    // ...
}
function growl() {
    // ...
}
function drink() {
    // ...
}
}
</code></pre>
How can you tell which responsibility should be moved into a separate class?
|
<h2>General</h2>
Single Responsibility has something to say about semantic redundancy and false semantic cohesion of code fragments (modules, classes, methods, functions). The problem (with all SOLID principles) is that <strong>it's not about applying them, it's about identifying a violation of them</strong>. Once you think code violates the SRP, you can devise a thought experiment (coordinated with your business people) that your code has to pass to verify your assumption.
<h3>Semantic redundancy</h3>
If a functional requirement occurs and you have to change two code fragments for the same reason, you violate the SRP.
This often happens if a developer does not know that some logic already exists that would solve a problem for him, and he develops it once again.
<h3>False semantic cohesion</h3>
If a functional requirement occurs and you produce an unintentional side effect in another code fragment, you violate the SRP.
This happens if technically savvy developers try to reduce code duplication without considering semantics.
E.g. a local-part check for an email address was implemented at different locations in your code base. A developer extracts the code into one central method because it looks technically identical. Later, another developer has to change the local-part check for one case and does not verify that all the other usages will still be correct under the new implementation.
<h2>Conclusion</h2>
Duplicate code may be a reason to suspect an SRP violation. But you have to be careful when putting code together that merely seems to be identical.
You cannot escape from semantics. You have to identify what the code really means in its context to decide whether there is a violation of the SRP.
You can break it down to: you have to put things together that belong together, and you have to isolate things that do not belong together. But this is an identification process.
In your example you have a Dog class with several methods that represent actions of a dog. If you are paranoid you can see your first redundancies: all methods can be "executed". So "execution" is what they all have in common. But we would have a hard time extracting this part, as the programming language itself takes care of execution. So there are lower boundaries to SRP that are set by the programming language being used.
Ok. Your dog barks, pees, drinks, growls and runs. I can only say that I currently see no violation of the SRP. But that doesn't mean there isn't one. Once you find an indicator of semantic redundancy or false semantic cohesion, you can act on it.
Finally, I want to say that sometimes an SRP violation can be tolerated when the code is expected to be stable. But this is only pragmatism and never a rule.
|
The responsibility here is <code>dog</code>. Dog is as dog does.
Many people read the Single Responsibility Principle as <em>"Must do only one thing."</em> That's not what it means. Bob Martin, author of the principle (and prolific source of mass confusion for inexperienced programmers everywhere) says it like this:
<blockquote>
Every class should have only one reason to change.
</blockquote>
I view a class as embodying a "purpose" or "area of responsibility." If you write a class that contains <code>Create</code>, <code>Read</code>, <code>Update</code> and <code>Delete</code> methods, those aren't four responsibilities; they are just one: <em>Data Access.</em>
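To tie this back to the report example quoted in the question, here is a minimal sketch of what "one reason to change per class" can look like (the class names and the toy record format are hypothetical, not from any particular codebase):
<pre><code># Sketch: the report example split by reason for change.
# ReportCompiler changes when the *content* rules change (substantive);
# ReportPrinter changes when the *format* changes (cosmetic).

class ReportCompiler:
    def compile(self, raw_records: list) -> list:
        # substantive responsibility: decide what goes into the report
        return [r for r in raw_records if r.get("include", True)]

class ReportPrinter:
    def render(self, report: list) -> str:
        # cosmetic responsibility: decide how the report looks
        return "\n".join(
            ", ".join(f"{k}={v}" for k, v in row.items()) for row in report
        )

if __name__ == "__main__":
    records = [{"name": "a", "total": 3}, {"name": "b", "total": 5, "include": False}]
    report = ReportCompiler().compile(records)
    print(ReportPrinter().render(report))
</code></pre>
A change to the filtering rules touches only <code>ReportCompiler</code>; a change to the output layout touches only <code>ReportPrinter</code>.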
|
https://softwareengineering.stackexchange.com
|
62,265
|
[
"https://mathoverflow.net/questions/62265",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2284/"
] |
This semester, I teach an introduction to probability course tailored for students with no science background and so with very <em>very</em> little prerequisites. We started with the basics of analytic combinatorics then moved on to random variables and the study of common laws (binomial, hypergeometric, geometric, Poisson). The audience being what it is, I try to avoid as much as possible calculus derivations of probability facts.
For some aspects of the course, it worked out well (for instance the derivation of the expectation of the binomial law) but because I am barely more knowledgeable than my students when it comes to probability, I have been unable to answer this question:
<blockquote>
Is there a set of natural probability properties which characterizes the discrete Poisson law?
</blockquote>
If yes, then I could use this as a definition of the Poisson law, which would suit my students better than saying "it's the law such that <span class="math-container">$P(X=k)=e^{-\lambda}\frac{\lambda^{k}}{k!}$</span>". By natural above, I want to convey the meaning that I hope they can be formulated using natural language (like, say, memorylessness) rather than using analytic objects.
More precisely, what I have in mind is the following:
<blockquote>
Is there such a set of properties which would make it at least a little intuitively plausible that the sum of two variables following Poisson law also follows Poisson law?
</blockquote>
Of course, the proof of the above fact is completely elementary, but it would still be above the level of everyone in the audience except perhaps the 3 top students.
Note that I would be happy even if proving that this set of properties characterize Poisson law turned out to be much harder than anything I will do in this course (or even much harder than anything I know myself about probability), because what I am looking for is not logical rigour but rather psychological efficiency: in 10 years, my students will have completely forgotten what a derivative is, but I would like them to be able to recollect something if confronted with an epidemiological survey using random variables (at least my most successful students use this course to strengthen their math knowledge before studying medicine).
I realize this question is very elementary, and would understand if it is deemed inappropriate, but the standard references I might consult on the subject will invariably (and with good reasons) develop much more calculus that my students will ever know before dealing with such questions (typically, they will characterize the Poisson law as the limit of the binomial law via Stirling's formula).
|
What about the characterization of Poisson point processes ?
Let us consider a counting process $(N(t))_{t \ge 0}$. That is, $N(0)=0$, $N(t)$ only increases by jumps of height $1$, and is right continuous. You can see $N(t)$ as the number of points of a random set in $]0,t]$.
Then $(N(t))_{t \ge 0}$ is a homogeneous Poisson point process if and only if:
1) the increments are independent
2) the increments are stationary : $N(t+s)-N(t)$ has the same law as $N(s)$.
(Maybe there is a further regularity assumption).
This implies that the increments are Poisson distributed : there exists $\lambda$ such that $N(s)$ is distributed according to Poisson with parameter $s\lambda$ for all $s$. This shows that under seemingly general conditions the Poisson distribution appears.
You can also see this from a more geometrical point of view, by considering more general point process than on the line.
|
We can't expect a completely finite way for the Poisson distribution to arise, since the number $e$ must come from somewhere. On the other hand, it should definitely not be necessary to introduce Stirling's formula.
I think the most natural approach is to define Poisson($\lambda$) as the limit distribution of the number of heads in a sequence of $N$ independent flips of a biased coin with probability $\lambda/N$ of heads.
This must be accompanied by some derivation showing that there is such a limit, and leading to the formula you state. Such a derivation will use binomial coefficients and the definition of the number $e$, but not much more.
But even without the derivation, if we just assume that the limit exists, it shows why the sum of two independent Poisson variables is again Poisson.
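As a quick sanity check of this definition (a sketch, not needed for the argument; $\lambda=3$ and $N=1000$ are arbitrary), one can compare the distribution of the number of heads with the Poisson formula:
<pre><code># Sketch: Binomial(N, lam/N) approaches Poisson(lam) as N grows.
import math

lam = 3.0
N = 1000

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

for k in range(6):
    print(f"k={k}:  Binomial = {binom_pmf(k, N, lam / N):.5f}   Poisson = {poisson_pmf(k, lam):.5f}")
</code></pre>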
|
https://mathoverflow.net
|
28,118
|
[
"https://physics.stackexchange.com/questions/28118",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9129/"
] |
The theory I've recently come to postulates that:
<ol>
<li>The volume of space filling the universe is finite and is constantly growing, thus the boundaries of the universe are constantly expanding.</li>
<li>The expansion of the universe's boundaries is caused by light that is converted into fresh space while reaching the universe's boundaries from within.</li>
<li>The expansion of the universe's matter is caused not by kinetic reasons of a big bang somewhere in a distant past but instead by the tendency of the matter to distribute itself evenly across the ever-increasing volume of space (which is perhaps connected to the cosmological constant).</li>
</ol>
Well, is it worth any constructive discussion (are there any existing theories of this kind?) or is it another example of why amateur physicists should not post their lunatic theories on this forum? And I like the second postulate best. Is there a way for it to branch into a separate theory if the combination of the three doesn't work out?
|
What you are describing looks like a hypothesis to me. A hypothesis is an idea. You have an idea. A theory, in the sense it is used by modern physics, is an idea about how the universe works which is supported by some rigorous elements, whether we're talking about some mathematical explorations (such as in the case of string theory, or back in the day, relativity), or observations (early chemical experiments). A theory has withstood challenges and attempts to discredit it by very knowledgeable and determined people.
In short, you should explore your ideas, but you should always have a way to eliminate invalid ideas by testing them or finding their flaws.
Most important of all, a theory should be considered nil (not bad, just without proven utility) if there are no justifications for it outside of your own intuition. Having said that, if your intuition tells you there is a good chance of there being something there, then you definitely should try developing it.
The important aspects of a new theory should be:
A) it explains our universe as well as or better than existing theories
B) explains currently unexplained things
C) can make predictions about things not yet observed.
If such predictions from your theory turn out to be false, then your theory has been falsified and you have competently practiced science. If on the other hand the predictions are accurate, then you have joined the club of frontier physicists!
The better a theory is, the more it seems valid as a function of how much people try to tear it apart, and fail. This process often leads to additional unexpected discovery.
Don't get discouraged by your first or currently favorite idea not turning out to be good - in a class I took a few years ago on evolutionary computation, our teacher told us that Einstein invented and mentally tested several ideas PER MINUTE when he was a patent clerk. This is less impressive (yet still very impressive) than it sounds if you understand that in science often discovery is a process of search-and-evaluate. Ideas are a dime a dozen, quite literally. What makes good ideas live longer than bad ones is that they withstand scrutiny.
The toolkit that competent scientists have which many regular citizens lack is refined training for knowing what is plausible and what not, in the physical world. You can have a theory that contradicts existing theories, but it had better be EXPLANATORY, not just POSTULATORY. Because existing theories explain a lot, a new theory that hopes to replace existing ones needs to be better at everything the other theories do.
Harsh, honest and educated scrutiny is the best way to sort good ideas from bad ones.
Good luck!
|
The current theory for describing the large scale structure of the universe is General Relativity and in particular the FLRW metric. GR gives us a set of equations into which we can feed experimental data and from which get predictions. So far the predictions have agreed with every experiment we've done, and that's why we all believe GR.
To get anyone to take your ideas seriously you'll have to make them quantitative. To take your suggestion 2, what equations describe the conversion of photons into space? How do these equations avoid perturbing the behaviour of photons we observe in the lab? To take your suggestion 3, we know all matter/energy attracts all other matter/energy due to gravitation, so matter tends to clump together, not spread apart as you suggest - that's why we see stars and galaxies. What equations describe your suggestion that matter tends to spread itself evenly, and why don't we see deviations from the matter distribution predicted by gravity?
It's fun to think about new theories, and even the oldest and most boring of us did this as excitable young students, but most of us have had to admit that Einstein did it first and he did it best. Unless you can firm up your ideas into something I can do calculations with, you're unlikely to get many physicists interested.
|
https://physics.stackexchange.com
|
148,799
|
[
"https://softwareengineering.stackexchange.com/questions/148799",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2673/"
] |
<strong>When we say 'documentation' for a software product, what does that include and what shouldn't that include?</strong>
For example, a recent question asked if comments are considered documentation?
But there are many other areas that this is a valid question for as well, some more obvious than others:
<ul>
<li>Manuals (obviously)</li>
<li>Release notes?</li>
<li>Tutorials</li>
<li>Comments</li>
<li>Any others?</li>
</ul>
Where is the line drawn? For example, if a 'tutorial' is documentation, is a 'video tutorial' documentation, or is it something else?
Generally, something in software isn't 'done' until it is implemented, tested and documented. Hence this question: what things should we consider as part of documentation in order to consider something 'done'?
<hr>
<sub>Question inspired by recent customer feedback at our conference indicating that our docs needed more 'samples', which we previously hadn't considered as carefully as we should have.</sub>
<sub>Audience: Software developers using our database(s), programming languages and associated tooling (such as admin clients to said DB)</sub>
|
The goal of documentation is to describe and explain the software product, so you could define the documentation to be the set of artefacts that contribute to that description or explanation. You'd probably not consider related <em>actions</em> as part of the documentation: e.g. a week-long training course is not documentation but the course materials are; a five-minute whiteboard chat is not documentation but an image of the whiteboard is.
Keeping the goal (explaining the software) in mind, the documentation is finished when the <em>customer is satisfied with the explanation</em>: just as the software is finished when the customer is satisfied with the software. Bear in mind that the customer for the documentation is not always the same as the customer paying for the software: support personnel, testers, salespeople and others will all need some understanding of what the software does and how it works.
This helps understand where your boundary for what should be included in the documentation lies. Using "reader" as a convenient shorthand, though we accept that videos or audio could be included: anything that helps the reader gain the information they need about the software is documentation they can use, everything else isn't. If your customer needs detailed walk-throughs of all their use cases, then that needs to be part of the documentation. Your developers probably need source code, information about your version control repository and build instructions: that's documentation for them, but as described above it wouldn't be part of the customer's documentation.
|
I think you took away the wrong lesson from your conversation at the conference. Modern software development methodologies advocate that the development team should work closely with its customers (or a product owner who's acting as a customer proxy). For all work delivered, the definition of "done" is something that is negotiated between the team and its customer, and this is done on a recurring basis as the software is being developed.
The problem you ran into is that you had a disconnect between what you assumed the customer needed and what they expected you to deliver, so at the end you got the "hey, where are all the samples?" surprise.
As far as what counts as documentation... well, it's pretty much everything you listed and maybe a few more things that you didn't. But nobody can tell you how much documentation your project needs. Every project is different and it is up to your team, your product owner and your customers to determine the level and type of documentation that is required for your project.
Some factors that would come into play:
<ul>
<li>Are you developing software v1.0 and then moving on to other projects, or is this an ongoing, long-term project? Comments/design docs become much more important in the latter case. On the other hand, if your customer is a mom-and-pop donut shop and you are writing a website for them that you will never see again... well, I guess code documentation is nice but not that important.</li>
<li>Are you developing a mobile game or software that controls a heart rate monitor in a hospital? Guess which one will have a definition of "done" that requires much more documentation?</li>
<li>Are your customers typical end-users or are they other developers? Do you have an API/SDK that you are exposing?</li>
<li>What is the level of expertise of your customers? This affects the choice of video tutorial vs. written material vs. some kind of interactive tutorial app</li>
<li>Do your customers care about what changed from version to version? Some do. Most don't. For a few, caring is the law (or close to it).</li>
</ul>
|
https://softwareengineering.stackexchange.com
|
477,858
|
[
"https://physics.stackexchange.com/questions/477858",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/195488/"
] |
Can someone explain in simple terms why you would use Nyquist frequency limits when processing a signal? What benefit does it provide, and how does it affect the results? And how does it relate to the Nyquist rate?
|
The Nyquist sampling theorem (sometimes called the Nyquist-Shannon sampling theorem) says that if you have a signal that is bandlimited with bandwidth <span class="math-container">$B$</span>, and you sample it with a sampling period <span class="math-container">$T_s$</span> <em>strictly</em> less than <span class="math-container">$1/(2B)$</span>, then the original signal can be perfectly reconstructed from the samples.
We call the minimum sampling rate for ideal reconstruction, <span class="math-container">$f_N = 2B$</span> (<span class="math-container">$f_N$</span> being in samples per second and <span class="math-container">$B$</span> in hertz), the <em>Nyquist limit</em>.
If you sample a signal with a sample rate greater than the Nyquist limit, it is (in principle) possible to perfectly reconstruct the original continuous-time signal.
If you sample a signal with a sample rate below the Nyquist limit, you can not perfectly reconstruct the signal due to aliasing.
So if you want to retain "complete" information about the signal you are sampling, you must sample above the Nyquist limit.
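As a quick illustration of what goes wrong below the limit (a sketch with arbitrary frequencies, not tied to any particular system): sample a 9 Hz cosine at 10 samples per second and it becomes indistinguishable from a 1 Hz cosine.
<pre><code># Sketch: aliasing when sampling below the Nyquist limit.
import math

fs = 10.0                 # sample rate [samples/s]; only B < 5 Hz is safe
f_signal = 9.0            # 9 Hz cosine, above fs/2
f_alias = fs - f_signal   # its alias at 1 Hz

for n in range(6):
    t = n / fs
    hi = math.cos(2 * math.pi * f_signal * t)
    lo = math.cos(2 * math.pi * f_alias * t)
    print(f"n={n}:  9 Hz sample = {hi:+.4f}   1 Hz sample = {lo:+.4f}")
</code></pre>
The two columns are identical, so after sampling there is no way to tell the 9 Hz signal from the 1 Hz one; the information above the Nyquist limit has been lost.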
|
As suggested by a comment, you should give us some context. There are several ways to answer the question. I'll provide a partial answer that is particularly relevant to physics, especially discrete periodic systems such as atoms in a solid. But there are other aspects, especially concerning continuous systems and time-domain questions.
For a discrete periodic system, the Nyquist frequency is the highest frequency possible. There are no higher frequencies.
It's easier to visualize space rather than time :-) so let's consider spatial frequencies. Imagine a system of balls threaded on a string, with the distance between neighboring balls the same for all neighboring pairs. A low spatial frequency / long wavelength looks like a wave. As you increase the spatial frequency, the wavelength gets shorter. Continue increasing the spatial frequency until the wavelength is two times the ball spacing. At that frequency one ball is displaced "left", the next "right", the third "left" again, etc. Every other ball is displaced maximally to the left or right.
There is no higher spatial frequency. Any arbitrary waveform on the system will consist of a sum of frequencies up to this maximum. There simply are no other frequencies to consider.
From a physics point of view, there is no consequence. From a math point of view, summations are limited, and algorithms to speed the calculation ("Fast Fourier Transform") can be used.
|
https://physics.stackexchange.com
|
705,179
|
[
"https://physics.stackexchange.com/questions/705179",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/123113/"
] |
My understanding of the flatness problem is that it says that if we leave out dark energy and inflation, then the density parameter <span class="math-container">$\Omega(t)$</span> tends to <span class="math-container">$\infty$</span> or <span class="math-container">$0$</span> unless we have <span class="math-container">$\Omega(t) = 1$</span> exactly. Thus, <span class="math-container">$\Omega(t) = 1$</span> is an unstable equilibrium point, making it very strange to observe <span class="math-container">$\Omega(t_{0})\approx 1$</span> today.
My question is, why is this called the "flatness problem?" I don't see the connection to geometry or curvature.
I understand that if <span class="math-container">$\Omega(t_{0})$</span> is close to <span class="math-container">$1$</span>, then <span class="math-container">$\Omega_{K}(t_{0})\equiv 1-\Omega(t_{0})$</span> would be close to zero, but how does this relate to the actual curvature value <span class="math-container">$K$</span>? In particular, isn't <span class="math-container">$K$</span> supposed to be constant (so the deviation from flatness is fixed)?
|
The relation between the curvature <span class="math-container">$k$</span> and the density parameter <span class="math-container">$\Omega$</span> can be described with the first Friedmann equation.
<span class="math-container">$$(\frac{\dot{a}}{a})^2 +\frac{kc^2}{a^2} = \frac{ 8\pi G }{3}\rho$$</span>
Define the Hubble parameter as <span class="math-container">$H = \dot{a}/a$</span> and the density parameter as <span class="math-container">$\Omega = 8\pi G \rho/3H^2$</span>; then comparing <span class="math-container">$\Omega$</span> with 1 is equivalent to comparing <span class="math-container">$k$</span> with 0.
<span class="math-container">$$\frac{kc^2}{a^2 H^2} = \Omega-1 $$</span>
It says <span class="math-container">$|\Omega-1| \propto 1/\dot{a}^2$</span>. If there is no inflation, <span class="math-container">$\dot{a}^2$</span> decreases with time, so <span class="math-container">$|\Omega-1|$</span> grows. Our current observations give <span class="math-container">$\Omega \simeq 1$</span>, so the density parameter had to be even closer to 1 at early times. This is called the flatness problem.
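A rough numerical sketch of how fast <span class="math-container">$|\Omega-1|$</span> is driven away from zero in a decelerating universe (assuming pure matter or pure radiation domination; the starting scale factor is illustrative):
<pre><code># Sketch: growth of |Omega - 1| ~ 1/(a*H)^2 = 1/adot^2 in a decelerating universe.
# Matter domination:    a ~ t^(2/3)  =>  adot^2 ~ 1/a    =>  |Omega - 1| grows like a.
# Radiation domination: a ~ t^(1/2)  =>  adot^2 ~ 1/a^2  =>  |Omega - 1| grows like a^2.

a_early = 1e-10   # illustrative early-universe scale factor
a_now = 1.0

print(f"matter-dominated growth factor:    {a_now / a_early:.1e}")
print(f"radiation-dominated growth factor: {(a_now / a_early) ** 2:.1e}")
</code></pre>
So for <span class="math-container">$\Omega$</span> to be near 1 today, <span class="math-container">$|\Omega-1|$</span> had to be extraordinarily small at early times.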
|
As you pointed out, as we go back in time, <span class="math-container">$\Omega_{\rm total}-1$</span> needs to be very small (i.e., <span class="math-container">$|\Omega_{\rm total}-1| \sim 10^{-61}$</span>).
This situation brings to mind the following question;
Why should the universe have started from such a unique situation?
In other words, why would the universe initially have started from <span class="math-container">$|\Omega_{\rm total} -1| = 0.00000...000001$</span> when it could have taken different <span class="math-container">$\Omega_{\rm total}$</span> values?
The situation can be viewed from the following perspective;
If the universe took any other value, we wouldn't be here, so it had to take that value (anthropic principle). But physicists don't like the anthropic principle. So the only solution is to come up with the idea that for any initial <span class="math-container">$\Omega_{\rm total}$</span> value, the universe would end up with <span class="math-container">$\Omega_{\rm total} \approx 1$</span>. The inflation mechanism provides that.
|
https://physics.stackexchange.com
|
205,712
|
[
"https://math.stackexchange.com/questions/205712",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/43306/"
] |
I have a set of N random integers between A and B.
Assuming that my random number generator is equally likely to return any integer between A and B, how can I calculate the probability that the next random integer is already present in my set?
I want to estimate how many random numbers I should generate in a batch such that I can say with probability P that at least one of the new integers does not already exist in the set.
Thanks
|
To complete your argument you have to show that $\ln(p_n)/\ln(n) \to 1.$
Now, <em>if</em> you have that $p_n \sim n\ln (n),$ then $\ln(p_n) = \ln(n) + \ln\ln(n) + o(1)$, and so indeed $\ln(p_n)/\ln(n) \to 1$ as $n \to \infty$. So at least
this is consistent with what you are trying to prove.
On the other hand, what you know is that
$n\ln(p_n)/p_n \to 1$ as $n \to \infty$. Taking $\ln$ gives
$$\ln(n) + \ln\ln(p_n) = \ln(p_n) + o(1),$$
and so
$$\ln(n)/\ln(p_n) + \ln\ln(p_n)/\ln(p_n) = 1 + o(1).$$
Can you finish from there?
|
There is a problem with this approach. You took $x = p_n$, but that means $\ln(x) = \ln(p_n)$ and <i>not</i> $\ln(n)$.
|
https://math.stackexchange.com
|
936,611
|
[
"https://math.stackexchange.com/questions/936611",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/58742/"
] |
Let $p,q$ be positive integers such that
$$\dfrac{95}{36}>\dfrac{p}{q}>\dfrac{96}{37}.$$
Find the minimum value of $q$.
Maybe I can use
$$95q>36p$$
and $$37p>96q$$
and then find the minimum value from these?
Earlier I tried $\dfrac{49}{18}\approx 2.722$, but since $\dfrac{95}{36}\approx 2.639$, the fraction $\dfrac{49}{18}$ does not satisfy the condition.
idea 2: since
$$\dfrac{95}{36}=\dfrac{95\cdot 37}{36\cdot 37}=\dfrac{3515}{1332}$$
$$\dfrac{96}{37}=\dfrac{96\cdot 36}{36\cdot 37}=\dfrac{3456}{1332}$$
so
$$\dfrac{3515}{1332}>\dfrac{p}{q}>\dfrac{3456}{1332}$$
so
$$p\in(3456,3515),\quad q=1332$$ is one possibility, though this $q$ need not be the minimum.
|
An interesting trick to solve this kind of problem is to consider the continued fractions of the LHS and the RHS. We have:
$$\frac{95}{36}=[2;1,1,1,3,3],\qquad \frac{96}{37}=[2;1,1,2,7]$$
hence
$$\frac{13}{5}=[2;1,1,2]$$
just lies between the LHS and the RHS, and it is the rational number with the smallest denominator lying in that interval.
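One can also confirm by brute force (a quick sketch; the loop just looks for the smallest denominator admitting a suitable numerator) that no smaller denominator works:
<pre><code># Sketch: find the smallest q such that some integer p satisfies 96/37 < p/q < 95/36.
from fractions import Fraction

lo, hi = Fraction(96, 37), Fraction(95, 36)

q = 1
while True:
    p = lo.numerator * q // lo.denominator + 1   # smallest integer p with p/q > lo
    if Fraction(p, q) < hi:
        print(f"q = {q}, p = {p}")               # prints q = 5, p = 13
        break
    q += 1
</code></pre>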
|
We have
$$2.64\gt a=\frac{17575}{5\cdot 36\cdot 37}=\frac{95}{36}\gt \color{red}{\frac{13}{5}}=2.6=\frac{17316}{5\cdot 36\cdot 37}\gt\frac{96}{37}=\frac{17280}{5\cdot 36\cdot 37}=b\gt 2.59.$$
Note that
$$\frac{11}{4}\gt a\gt b\gt\frac{10}{4}$$
$$\frac{8}{3}\gt a\gt b\gt\frac{7}{3}$$
$$\frac{6}{2}\gt a\gt b\gt\frac{5}{2}$$
$$\frac{3}{1}\gt a\gt b\gt\frac{2}{1}$$
Hence, the minimum of $q$ is $5$.
|
https://math.stackexchange.com
|
181,694
|
[
"https://mathoverflow.net/questions/181694",
"https://mathoverflow.net",
"https://mathoverflow.net/users/38889/"
] |
What is known about finite groups $G$ for which there exists a Galois extension $K$ of $\mathbb{Q}$ ramified only at $2$ such that $\text{Gal}(K/\mathbb{Q}) \cong G$ ? More generally, which groups can be realized over $\mathbb{Q}$ with no ramification outside a given (finite) set of primes?
I am thus interested in results of two kinds:
<ul>
<li>Realization of specific groups.</li>
<li>Examples of groups which are not realizable in this manner, and restrictions on groups which are.</li>
</ul>
|
Let me first note that there is a slight ambiguity when one says "ramified only at 2". Strictly speaking, that means that the extension is unramified at every place
of $\mathbb Q$ except 2, including infinity. The latter means that the extension is totally real. Often, however, "ramified only at 2" means "ramified only possibly at 2 and $\infty$", and that is probably what you mean. Here, to remove
ambiguity, for $S$ a finite set of places of $\mathbb Q$, I will use "ramified only at $S$" in the strict sense.
That being said, the short answer is that whatever the finite set $S$, there are strong restrictions on the finite groups $G$ that can appear as the Galois group of an extension of $\mathbb Q$ unramified outside $S$, but in general, to "describe" all these restrictions (we may mean different things by that) is an open problem. To see what kind of restrictions can appear,
let us consider several situations, from the very particular to the general.
If $S=\emptyset$ or $S=\{\infty\}$, then Minkowski's theorem tells us that there is no nontrivial extension unramified outside $S$, so the only possible Galois group $G$ is the trivial one. A very strong restriction indeed.
If $S$ is any set, but you try to determine what <strong>abelian</strong> $G$ may appear, then the answer is given by class field theory. Precisely, if $S=\{\ell_1,\dots,\ell_k,\infty\}$, then the abelian groups which appear are exactly the quotients of $\mathbb Z_{\ell_1}^\ast \times \dots \times \mathbb Z_{\ell_k}^\ast$, and it is obvious that many abelian groups are not of this type (e.g. $\mathbb Z/\ell \mathbb Z$ for $\ell > \ell_1,\dots,\ell_k$). If $S=\{\ell_1,\dots,\ell_k\}$, replace $\mathbb Z_{\ell_1}^\ast \times \dots \times \mathbb Z_{\ell_k}^\ast$ by its quotient by the diagonal subgroup $\{1,-1\}$.
If $S$ is any set, and you're interested in groups $G$ that are $p$-groups (for a $p$ that may or may not be in $S$), then again, class field theory can help you
because of the Frattini argument saying that a $p$-group is generated by any set that maps surjectively onto the maximal abelian $p$-torsion quotient of $G$.
So by the above, the $p$-groups $G$ appearing this way have a system of generators with no more elements than the dimension over $\mathbb F_p$ of the maximal $p$-torsion quotient of $\mathbb Z_{\ell_1}^\ast \times \dots \times \mathbb Z_{\ell_k}^\ast$. Is that all? No. We can describe in some cases all the $p$-groups $G$ appearing, but not in all cases. For instance, with $S=\{2,\infty\}$ and $p=2$, the groups $G$ appearing are exactly the $2$-groups having a system of two generators, one of them of square 1. For $S=\{2\}$, the only $2$-groups appearing are the cyclic groups. In general, the case $p \in S$ is better understood than the case $p \not \in S$. In the latter case, it is a folklore conjecture, on which not much is known, that only finitely many $p$-groups $G$ appear (one can check readily that the conjecture is true for abelian $p$-groups, by the above paragraph).
If now we consider general finite groups $G$, well, the above shows that there are restrictions, but determining them all is largely open. For instance, a conjecture of Shafarevich states that there is an $n=n(S)$ such that every group $G$ which is the Galois group of an extension unramified outside $S$ has a system of generators with fewer than $n$ elements. But this is open in every non-trivial case.
Let me also mention a simple but too little known result of Serre, even if it is not strictly speaking part of your question: [Edit: Sorry, I messed up the statement of the result. Here is the right version]. If every finite group is a Galois group over $\mathbb Q$ (i.e. if the inverse Galois problem has a positive solution), then every finite group is the Galois group of an extension of $\mathbb Q$ unramified at infinity -- that is, a totally real extension. In other words, if the inverse Galois problem holds, there should be no restriction on $G$ in the case where $S$ is the set containing all finite places but not the place at infinity.
|
Concerning the question of Pablo that follows Joël's answer:
If $k$ is an algebraically closed field, then the situation is completely understood, thanks to work of Grothendieck (in characteristic 0) completed by the proof of Abhyankar's conjecture by Raynaud and Harbater.
Precisely, let $C/k$ be a smooth affine curve, complement of $r\geq1$ points in a projective smooth connected curve of genus $g$, over the alg. closed field $k$.
Let $F=k(C)$ be the field of rational functions on $C$.
Extensions of $F$ that are unramified on $C$ correspond to connected
étale covers of $C$.
If $\mathop{\rm char}(k)=0$, then Grothendieck showed (this is the main result of SGA 1) that a finite group $G$ is the Galois group of a connected étale cover of $C$ if and only if it is generated by $2g+r-1$ elements. More precisely, the Galois group of the maximal extension of $k(C)$ unramified on $C$ is a free profinite group on $2g+r-1$ generators. It is worth observing that the fundamental group of a compact connected Riemann surface of genus $g$ deprived of $r\geq1$ points is a free group on $2g+r-1$ generators. (The case $r=0$ is also treated by Grothendieck; $G$ then needs to be generated by $2g$ elements $a_1,b_1,\dots,a_g,b_g$ satisfying the relation $(a_1,b_1)(a_2,b_2)\dots(a_g,b_g)=1$.)
If $\mathop{\rm char}(k)=p>0$, then $C$ has « more » étale covers. For example,
if $C=\mathbf A^1_k$, then $f\colon C\to C$ given by $f(t)=t-t^p$ is a connected étale cover of degree $p$. Raynaud (when $C=\mathbf A^1_k$) and Harbater (in general) showed that a finite group $G$ is the Galois group of a connected étale cover of $C$ if and only if the quotient of $G$ by the (normal) subgroup generated by its $p$-Sylow subgroups is generated by $2g+r-1$ elements.
Specific examples had been exhibited by many mathematicians, such as Abhyankar himself (he has a long series of papers in this style) and Nori (when $G$ is a finite group of Lie type).
|
https://mathoverflow.net
|
5,690
|
[
"https://chemistry.stackexchange.com/questions/5690",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/2018/"
] |
In nucleophilic aromatic substitution reactions, why do fluorides react faster than bromides?
Ordinarily bromide is a better leaving group than fluoride, e.g. in <span class="math-container">$\mathrm{S_N2}$</span> reactions, so why isn't this the case here? The only thing I can think of is that fluorine is more electron-withdrawing (via the inductive effect), which could stabilise the Meisenheimer complex formed as an intermediate.
|
The key point to understanding why fluorides are so reactive in the nucleophilic aromatic substitution (I will call it S<sub>N</sub>Ar in the following) is knowing the rate determining step of the reaction mechanism. The mechanism is as shown in the following picture (Nu = Nucleophile, X = leaving group):
<img src="https://i.stack.imgur.com/uvM8U.png" alt="enter image description here">
Now, the first step (= addition) is very slow as aromaticity is lost and thus the energy barrier is very high. The second step (= elimination of the leaving group) is quite fast as aromaticity is restored. So, since the elimination step is fast compared to the addition step, the actual quality of the leaving group is not very important, because even if you use a very good leaving group (e.g. iodine), which speeds up the elimination step, the overall reaction rate will not increase as the addition step is the bottleneck of the reaction.
Now, what about fluorine? Fluorine is not a good leaving group, but that doesn't matter as I said before. It is not the leaving group ability of <span class="math-container">$\ce{F-}$</span> which makes the reaction go faster than with, say, bromine or chlorine, but its very high negative inductive effect (due to its large electronegativity). This negative inductive effect helps to stabilize the negative charge in the Meisenheimer complex and thus lowers the activation barrier of the (slow) addition step.
Since this step is the bottleneck of the overall reaction, a speedup here, speeds up the whole reaction.
Leaving groups with a negative mesomeric effect (but little negative inductive effect) are not good at stabilizing the Meisenheimer complex, because the negative charge can't be delocalized to them.
|
Fluorine is the most electronegative element, and the fluoride anion is also much smaller and less polarizable than any of the other halogen anions, making its activity much more dependent on solvent effects. In protic solvents, fluoride tends to be very strongly solvated as a hydrogen bond acceptor and is thereby stabilized, resulting in high leaving-group tendency and very low nucleophilicity. The larger halogens can be better leaving groups in aprotic solvents, however, where their bulk and polarizability results in greater comparative stability, and the stabilizing effect of hydrogen bonds is not present. I guess it's conceivable that the presence of fluorine atoms would better stabilize the Meisenheimer complex than other halogens, though I don't know for sure. Still, since all halogens are electron-withdrawing by induction or negative hyperconjugation, the presence of more strongly electron-withdrawing groups (e.g., nitro) conjugated to the $\pi$ system of the ring would still be necessary.
What follows is not a direct answer to your question, but a slight segue into a related subject:
The rate-determining step in S<sub>N</sub>Ar mechanisms is typically the initial nucleophilic attack resulting in formation of the Meisenheimer complex, or occasionally the final proton transfer if neutral nucleophiles are used. The loss of the leaving group is normally a fast step, given the high energy of the Meisenheimer complex intermediate. Once the complex is formed, loss of the leaving group should be rapid. Therefore, I suspect aprotic solvents would be preferable in many situations (particularly with neutral nucleophiles), irrespective of the effect on the leaving group, since they don't hamper the activity of the nucleophile in the way that protic solvents can.
|
https://chemistry.stackexchange.com
|
350,636
|
[
"https://physics.stackexchange.com/questions/350636",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/30522/"
] |
Is there somewhere on the internet I can find cosmological redshift data? In particular, I would like to know the redshift around the time when the expansion of the Universe began to accelerate.
|
One of the main places where data about galaxies gets aggregated is the <A HREF="https://ned.ipac.caltech.edu/" rel="nofollow noreferrer">NASA Extragalactic Database</A> (NED). For example, <A HREF="https://ned.ipac.caltech.edu/cgi-bin/objsearch?objname=M101&extend=no&hconst=73&omegam=0.27&omegav=0.73&corr_z=1&out_csys=Equatorial&out_equinox=J2000.0&obj_sort=RA+or+Longitude&of=pre_text&zv_breaker=30000.0&list_limit=5&img_stamp=YES" rel="nofollow noreferrer">here's the information page for M101</A> with the default cosmology in their search form. In particularly you want to look at the redshift-independent distances, and the redshift data points. Using the 'Metric Distance' you can calculate the cosmological redshift it would have if it weren't moving (for a given cosmology) by numerically inverting <A HREF="https://arxiv.org/pdf/astro-ph/9905116.pdf" rel="nofollow noreferrer">equation 15 from Hogg's cosmology calculations summary paper</A> (probably have to numerically integrate, too).
Note that the peculiar velocity (velocity relative to the Hubble flow) is usually around hundreds of kilometers per second. So any redshift greater than about $0.01$ (equivalent to a radial velocity of about $3,000\operatorname{km}\operatorname{s}^{-1}$) is almost certainly dominated entirely by the cosmological redshift of the object. There are a lot of databases replete with redshifts of galaxies that stretch back to around $z=1$ for ordinary galaxies, and much further for carefully selected galaxies and active galactic nuclei/quasars. For example: <A HREF="http://skyserver.sdss.org/dr1/en/sdss/data/data.asp" rel="nofollow noreferrer">Sloan Digital Sky Survey (SDSS)</A>, <A HREF="http://deep.ps.uci.edu/" rel="nofollow noreferrer">DEEP2</A>, the <A HREF="http://adsabs.harvard.edu/abs/2012ApJS..200....8K" rel="nofollow noreferrer">AGN and Galaxy Evolution Survey (AGES)</A>, and <A HREF="http://www.gama-survey.org/" rel="nofollow noreferrer">Galaxy and Mass Assembly (GAMA)</A>. This list is nowhere near complete, of course.
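As a sketch of the kind of numerical work involved (the helper functions below are mine, not NED's or Hogg's code, and the parameters simply mirror the defaults in the NED link above: H0 = 73, Omega_m = 0.27, Omega_Lambda = 0.73), the comoving-distance integral for a flat matter + Lambda cosmology can be evaluated with a simple quadrature and inverted with bisection:
<pre><code># Sketch: D_C(z) = (c/H0) * integral_0^z dz'/E(z'),  E(z) = sqrt(Om*(1+z)^3 + OL),
# plus a crude bisection to invert it (find the z corresponding to a given distance).
import math

c = 299792.458   # km/s
H0 = 73.0        # km/s/Mpc
Om, OL = 0.27, 0.73

def E(z):
    return math.sqrt(Om * (1 + z)**3 + OL)

def comoving_distance(z, steps=5000):
    # trapezoidal integration of dz'/E(z') from 0 to z
    dz = z / steps
    s = 0.5 * (1 / E(0) + 1 / E(z)) + sum(1 / E(i * dz) for i in range(1, steps))
    return (c / H0) * s * dz

def redshift_for_distance(d_mpc, z_max=10.0, tol=1e-6):
    lo, hi = 0.0, z_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if comoving_distance(mid) < d_mpc else (lo, mid)
    return 0.5 * (lo + hi)

print(redshift_for_distance(6.7))   # e.g. for a metric distance of ~6.7 Mpc (illustrative value)
</code></pre>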
|
Redshift of the time when the universe started to accelerate:
From Friedmann's equations:
$$\dot{a}=aH=H_0\sqrt{\Omega_{m0}/a+a^2\Omega_{\Lambda 0}}$$
the requirement for acceleration is $\ddot{a}>0$; setting $\ddot{a}=0$ gives the onset of acceleration.
The calculation gives
$$a=\left(\frac{\Omega_{m0}}{2\Omega_{\Lambda 0}}\right)^{1/3} \approx 0.6$$
$\rightarrow z=1/a-1 = 0.67$
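A two-line numerical check of this (with illustrative present-day density parameters, here $\Omega_{m0}=0.3$ and $\Omega_{\Lambda 0}=0.7$):
<pre><code># Sketch: scale factor and redshift at which the expansion starts accelerating.
Om, OL = 0.3, 0.7                 # illustrative present-day density parameters
a = (Om / (2 * OL)) ** (1 / 3)    # from setting the second derivative of a to zero
z = 1 / a - 1
print(f"a = {a:.3f},  z = {z:.3f}")   # roughly a ~ 0.60, z ~ 0.67
</code></pre>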
|
https://physics.stackexchange.com
|
1,351,142
|
[
"https://math.stackexchange.com/questions/1351142",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/247415/"
] |
Find a,b,c $\in \mathbb{R}$ for which the function is a) continuous, b) differentiable.
$$f(x)=\left\{\begin{array}{cc}
ax^2+bx+c & x<0 \\
2\sin x+\cos x & x\:\ge 0
\end{array}\right.$$
From what I know a function is continuous when the following occurs:
$$\lim_{x\to 0^+\:}f(x) = \lim_{x\to 0^-\:}f(x) = f(0).$$
Calculating the limits would lead me to $c=1$, but what about a and b?
What do I do for proving that it is differentiable? Because from what I know, a function is differentiable when $f'(x_0)$ <em>is a finite number</em>. But I get that $f'(x_0)=2$ which would mean I can have any $a,b,c$?
|
You can use the rational root theorem to guess some roots.
<blockquote>
<strong>Rational root theorem.</strong> All rational roots have the form <span class="math-container">$\frac{p}{q}$</span>, with <span class="math-container">$p$</span> a divisor of the constant term and <span class="math-container">$q$</span> a divisor of the leading coefficient.
</blockquote>
For example, in the integer case, one can take <span class="math-container">$q=1$</span>, and one only has to test <span class="math-container">$p = \pm 1$</span>, <span class="math-container">$p = \pm 2$</span>, <span class="math-container">$p= \pm 5$</span>, and then you have it. (you of course could test <span class="math-container">$\pm 7$</span>, <span class="math-container">$\pm 10$</span>, <span class="math-container">$\pm 14$</span>, <span class="math-container">$\pm 35$</span> and <span class="math-container">$\pm 70$</span> as well, but it would be less work to just divide by the <span class="math-container">$x-5$</span>)
Another technique:
<blockquote>
<strong>Descartes' rule of signs.</strong> The number of positive roots of <span class="math-container">$P(x)$</span> is equal to the number of sign changes in the sequence formed by the coefficients of <span class="math-container">$P(x)$</span>, or it is less than that by an even number.
The number of negative roots of <span class="math-container">$P(x)$</span> is equal to the number of sign changes in the sequence formed by the coefficients of <span class="math-container">$P(-x)$</span>, or it is less than that by an even number.
Roots with multiplicity <span class="math-container">$n$</span> are counted <span class="math-container">$n$</span> times.
</blockquote>
In this case it is useless. But if it is a polynomial like <span class="math-container">$x^3+3x^2+4x+2$</span>, you now know that there are no positive roots.
Last technique:
<blockquote>
<strong>Estimating roots.</strong> If you've tested all eligible roots that are smaller than, say, 10, then you can use this. If the first two coefficients are not extremely small compared to the last two, then it's a good idea to look at your eligible roots near to <span class="math-container">$$-\frac{\mathrm{second} \; \mathrm{coefficient}}{\mathrm{first} \; \mathrm{coefficient}}$$</span>
</blockquote>
If the root is larger than 10, the <span class="math-container">$x^3$</span> and <span class="math-container">$x^2$</span> terms are large enough that we can ignore the smaller terms, so this is a good estimate. Write <span class="math-container">$a$</span> for the first coefficient and <span class="math-container">$b$</span> for the second, <span class="math-container">$c$</span> for the third and <span class="math-container">$d$</span> for the fourth.
Now, if <span class="math-container">$x$</span> is large, then <span class="math-container">$ax^3+bx^2+cx+d \sim ax^3+bx^2$</span>. Therefore the roots will be comparable. But <span class="math-container">$ax^3+bx^2=0$</span> gives <span class="math-container">$x^2=0$</span> or <span class="math-container">$ax+b=0$</span>. In the first case <span class="math-container">$x$</span> isn't large enough. In the second case we have <span class="math-container">$ax+b=0$</span>, thus <span class="math-container">$ax=-b$</span>, thus <span class="math-container">$x=\frac{-b}{a}=-\frac{\mathrm{second} \; \mathrm{coefficient}}{\mathrm{first} \; \mathrm{coefficient}}$</span>.
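A small sketch of how the candidate test can be automated (the function and the example cubic are hypothetical, since the polynomial from the original question isn't shown here):
<pre><code># Sketch: enumerate rational-root candidates p/q (p divides the constant term,
# q divides the leading coefficient) and test each one, per the rational root theorem.
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """coeffs = [a_n, ..., a_1, a_0] with integer entries, a_n != 0 and a_0 != 0."""
    lead, const = coeffs[0], coeffs[-1]
    roots = set()
    for p in divisors(const):
        for q in divisors(lead):
            for cand in (Fraction(p, q), Fraction(-p, q)):
                value = sum(c * cand ** (len(coeffs) - 1 - i) for i, c in enumerate(coeffs))
                if value == 0:
                    roots.add(cand)
    return sorted(roots)

# hypothetical example: x^3 - 4x^2 - 7x + 10 = (x - 1)(x + 2)(x - 5)
print(rational_roots([1, -4, -7, 10]))   # roots: -2, 1, 5
</code></pre>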
|
You could work by locating the roots approximately by computing at some easy values and noting sign changes - Sturm's Theorem is a heavy duty resource, and Descartes Rule of Signs can be indicative. There are more basic observations too, which can help to narrow the search amongst rational roots - using changes in the sign of the value and the intermediate value theorem.
In this example the polynomial is positive at $x=0, x=-1$ but negative at $x=1$ - so that locates a small positive root. And the polynomial is positive for large $x$ and negative for small (large negative) $x$. Choose $x=\pm10$ as easy to calculate - the value is positive for $x=10$ and negative for $x=-10$.
So if there is an integer root (noting that we've located all three roots approximately) it is greater than $1$ and less than $10$, or less than $-1$ and greater than $-10$ and is a factor of $70$. And every calculation we've done so far has been very easy.
The calculations for $\pm n$ can be done together, because the terms have the same magnitude, and with the options being $2,5,7$ you do $\pm 5$ first because that tells you whether to do $2$ or $7$ if $5$ is a miss.
|
https://math.stackexchange.com
|
404,543
|
[
"https://softwareengineering.stackexchange.com/questions/404543",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/356397/"
] |
I have code that looks something like this in a class:
<pre><code>string method x (){
foreach(a in alist){
//do something
}
return string;
}
integer method y (){
foreach(a in alist){
//do something
}
return integer;
}
double method z (){
foreach(a in alist){
//do something
}
return double;
}
</code></pre>
I sense a code smell here in the multiple foreach loops over the same object, but I am not sure whether it is real or not. Is there any way I can refactor this code so that the foreach loop lives in only one place?
|
<h2>It's not inherently a code smell</h2>
There is nothing wrong with having a <code>foreach</code> in multiple methods, <em>unless</em> you always run these three methods consecutively, at which point you can simplify it to:
<pre><code>public void xyz()
{
foreach(a in alist)
{
x(a);
y(a);
z(a);
}
}
</code></pre>
But if you do not always call these methods in the same succession (which I'm suspecting is the case), then this does not apply.
<blockquote>
I feel like going through the same object different methods is a code smell
</blockquote>
If you were calculating a value based on the same input parameters, I'd agree. However, this is not the case here. Each method requires its own enumerator to do its own enumeration.
<blockquote>
What if change the name of the alist variable in the future? I would have to make change at least 3 places.
</blockquote>
This would apply to every variable, method, class name, or namespace you would ever use. It would literally render you unable to write <em>any</em> code that wouldn't allegedly smell.
With the right IDE (or extension), this isn't a problem, since you can refactor names, which will change all references to that name across the codebase.
In the case of Visual Studio, that's done by pressing F2 while your cursor is on the name (variable/method/class/...).
<blockquote>
Is there any way I can extract the foreach to a common place and get the same desired output?
</blockquote>
Not meaningfully so. I mean, you could abstract it, but it would add complexity without actually improving performance or maintainability. It would detract from readability, not just because of the complexity increase, but also because you change a well known concept to something homebrewed, which requires additional knowledge for a developer to follow when they read the code.
Several drawbacks, no benefits. No reason to do this.
<hr>
The only reasonable case for a code smell would not be for the enumeration itself, but any preliminary filtering logic. For example:
<pre><code>public string x ()
{
foreach(a in alist.Where(a => a.Foo == "Bar"))
{
//do something
}
return string;
}
// And the same Where() for y() and z()
</code></pre>
Here, the filtering logic itself can be abstracted to avoid needless repetition:
<pre><code>private IEnumerable<A> aBarList => alist.Where(a => a.Foo == "Bar");
public string x ()
{
foreach(a in aBarList)
{
//do something
}
return string;
}
// And the same Where() for y() and z()
</code></pre>
This is just one of many possible implementations. Your vague example code makes it impossible to judge the best implementation for your current case.
|
It's not a code smell. In fact, Martin Fowler over at <a href="https://refactoring.com" rel="nofollow noreferrer">refactoring.com</a> argues you should repeat a loop if it adds value, because of the trivial computing cost of repeating said loop.
You should make <code>alist</code> a function argument and make the function static (if possible in the real world).
|
https://softwareengineering.stackexchange.com
|
39,257
|
[
"https://biology.stackexchange.com/questions/39257",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/19159/"
] |
I am learning about frameshift mutations. Frameshifts can occur due to a nucleotide deletion. Suppose that due to a frameshift, because of a deletion somewhere upstream from the original start codon, two additional start codons are generated, just before the stop codon in the new reading frame. What would happen in terms of translation?
<pre><code>AUG-GCC-AUA-AUG--------UAA
Start       Start      then stop
</code></pre>
|
There is a basic misconception in the question you have asked, which @biogirl has explained. <strong>There is only one start codon in any mRNA</strong> and it defines the <strong>open reading frame.</strong>
All other AUGs in the open reading frame are simply codons that encode the amino acid methionine and have no function in the start of translation. There are factors other than AUG that determine the start of translation.
So a frameshift that gives you an additional AUG only means that you will have a different amino acid encoded in the resulting polypeptide. A frameshift will generally completely alter the protein product of the gene. If, however, the frameshift disrupts the start codon, then it is unlikely that you will have any translation whatsoever, as the other elements necessary for determining the start of translation will likely not be present in other areas of the coding sequence. In prokaryotes, you need a Shine-Dalgarno sequence to initiate translation, and in eukaryotes, though not all of the factors for translation start are well understood, many genes carry a Kozak sequence that indicates to the ribosome the start of the open reading frame.
<strong>The more important codons to look for are introductions of stop codons.</strong> These three codons, <strong>UAA, UAG, and UGA</strong> do not have tRNAs with complementary anticodons (for the most part, as tRNA genes can also sustain mutations that change their anticodon) and therefore all result in the termination of translation if the shifted frame results in the ribosome reading one of the three stop codons in frame.
|
AUG functions as a start codon only when it is at the first position of the open reading frame. Whenever AUG is present elsewhere, it codes for the amino acid methionine. Go through the basics of translation in a good book.
|
https://biology.stackexchange.com
|
431,317
|
[
"https://electronics.stackexchange.com/questions/431317",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/217779/"
] |
I'm trying to understand the circuitry of TV signal splitters and associated boosters/amplifiers, and I have two questions.
My situation is as follows. I have a TV aerial (antenna) in my loft. It connects to a non-powered box, which I assume must be a passive splitter. From there cables run to sockets in a total of 6 rooms. However, only in two or three of those rooms does a TV connected to the socket show any signal; and even in those rooms, there is no signal unless a booster, located in the room nearest to the aerial, is connected and powered up.
That all sounds sensible, but then I stop being able to understand.
<ol>
<li>Here's the odd thing, and the first question. To make any TV on the system work, the booster has to be connected to the aerial socket, but nothing needs to be connected to the booster output; so long as the booster is connected and switched on, the TV in a neighbouring room will work. It's as though it is somehow sending the amplified signal back up its input cable. But I've looked at circuit diagrams for boosters, and that doesn't look possible. Can anyone explain what is going on?</li>
<li>I am trying to find out what is wrong in the rooms where no signal is ever reported. I read somewhere that if I look across the terminals of the TV socket with an ohmmeter, I should see effectively zero resistance, since there is continuity through the aerial. However, this is not true for any of my sockets. With all devices disconnected, if I look at the resistance at the socket the booster normally connects to, I see about 4k. If I look across any of the other sockets (including those where a signal is successfully received), I see no continuity at all. So I suppose that the passive splitter must have a capacitor or transformer somewhere in its circuitry, but I can't find a circuit diagram anywhere that would show whether this is true. Can anyone say whether this is the case, i.e. whether I should be able to see continuity when looking into a socket?</li>
</ol>
Background information:
<ul>
<li>Until recently we have never tried to use TVs in the rooms where we now find they don't work, so this is probably not a new problem. </li>
<li>In particular, the non-functioning sockets have not been used since the analogue era.</li>
<li>The wiring is probably at least 30 years old, and certainly pre-digital. </li>
<li>I'm in the UK, and the TV signal is digital terrestrial. </li>
<li>Fitting an outdoor aerial to get a better original signal is not an option in our neighbourhood. </li>
<li>The passive splitter and aerial connections are all screw-downs, and they are not conveniently located, so swapping cables around for test purposes is slow and painful.</li>
</ul>
|
You've made some wrong assumptions about what the parts of the system are. The part you're describing as a non-powered passive splitter is actually a powered active splitter. Splitting one aerial signal into six with a passive splitter is unlikely to give you sufficient signal on any of the six outputs, especially if the aerial is in the loft.
The part you're describing as the booster is just the power supply to the active splitter. It sends a DC voltage up the cable running to the active splitter, and filters it out of the cable running to the TV. That's why it has to be powered up for any of the TVs to work.
The rooms where TVs don't work are either down to a faulty output from the splitter or a defective cable. To do a basic test on each cable, disconnect it from the splitter and check that it's open circuit, then short one end together and check that it now shows a short circuit at the other end.
|
It is desirable to have the amplifier as close to the aerial as possible, and certainly before the signal is split; however, getting mains power to such locations is often problematic.
The solution to this is amplifiers that are powered via one of the output coax lines. The amplifier is sited close to the aerial while a power injection unit is sited close to the TV. These are often sold as "masthead" amplifiers. I believe this is the setup you have.
I don't think you can read much into whether or not there is DC continuity on the output of an amplifier; whether you see it or not depends entirely on the details of the amplifier's internal circuitry.
I would start with end-to-end continuity and short-circuit tests on the cable runs (note: some sockets have isolation capacitors, so you may need to test from the terminals on the back of the sockets rather than the connections on the face). If you find any opens or shorts then you obviously need to fix them.
Failing that, it may be worth swapping connections around to see if the problem follows the cable or follows the connection on the amplifier, but honestly, given the age of the amplifier and the difficulty of access, I'd be more inclined towards replacing the amplifier at that point.
|
https://electronics.stackexchange.com
|
26,154
|
[
"https://ai.stackexchange.com/questions/26154",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/44278/"
] |
I'm trying to implement Deep Q-Learning for a pet problem having a continuous state space and discretized action space.
The algorithm for table-based Q-Learning updates a single entry of the Q table - i.e. a single <span class="math-container">$Q(s, a)$</span>. However, a neural network outputs an entire row of the table - i.e. the Q-values for every possible action for a given state. So, what should the target output vector be for the network?
I've been trying to get it to work with something like the following:
<pre><code>q_values = model(state)
action = argmax(q_values)
next_state = env.step(state, action)
next_q_values = model(next_state)
max_next_q = max(next_q_values)
target_q_values = q_values
target_q_values[action] = reward(next_state) + gamma * max_next_q
</code></pre>
The result is that my model tends to converge on some set of fixed values for every possible action - in other words, I get the same Q-values no matter what the input state is. (My guess is that this is because, since only 1 Q-value is updated, the training is teaching my model that most of its output is already fine.)
What should I be using for the target output vector for training? Should I calculate the target Q value for every action, instead of just one?
|
As you say, the output of a <span class="math-container">$Q$</span> network is typically a value for every action of the given state. Let us call this output <span class="math-container">$\mathbf{x} \in \mathbb{R}^{|\mathcal{A}|}$</span>. To train your network using the squared Bellman error you first need to calculate the scalar target <span class="math-container">$y = r(s, a) + \gamma\max_{a'} Q(s', a')$</span>. Then, to train the network, we take a vector <span class="math-container">$\mathbf{x'} = \mathbf{x}$</span> and change the <span class="math-container">$a$</span>th element of it to be equal to <span class="math-container">$y$</span>, where <span class="math-container">$a$</span> is the action you took in state <span class="math-container">$s$</span>; call this modified vector <span class="math-container">$\mathbf{x'}_a$</span>. We calculate the loss <span class="math-container">$\mathcal{L}(\mathbf{x}, \mathbf{x'}_a)$</span> and backpropagate through it to update the parameters of our network.
Note that when we use <span class="math-container">$Q$</span> to calculate <span class="math-container">$y$</span> we typically use some form of target network; this can be a copy of <span class="math-container">$Q$</span> whose parameters are only updated every <span class="math-container">$i$</span>th update, or a network whose weights are updated using a Polyak average with the main network's weights after every update.
Judging by your code, it looks as though your action selection is what might be causing you problems. As far as I can tell you're always acting greedily with respect to your <span class="math-container">$Q$</span>-function. You should be looking to act <span class="math-container">$\epsilon$</span>-greedily, i.e. with probability <span class="math-container">$\epsilon$</span> take a random action and act greedily otherwise. Typically you start with <span class="math-container">$\epsilon=1$</span> and decay it over the course of training down to some small value such as 0.05.
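If it helps to see those two ideas concretely (overwriting only the taken action's entry of the output vector, and acting <span class="math-container">$\epsilon$</span>-greedily), here is a minimal NumPy sketch. The linear "model", the shapes and all of the names are placeholders assumed for illustration, not the poster's actual code:
<pre><code>import numpy as np

rng = np.random.default_rng(0)
n_actions = 4
gamma = 0.99

def q_values(state, weights):
    """Toy linear 'network': a state (feature vector) -> one Q-value per action."""
    return state @ weights

def select_action(state, weights, epsilon):
    """Epsilon-greedy: random action with probability epsilon, greedy otherwise."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_values(state, weights)))

def build_target(state, action, reward, next_state, weights, target_weights):
    """Copy the current predictions, then overwrite only the taken action's entry
    with r + gamma * max_a' Q_target(s', a')."""
    target = q_values(state, weights).copy()
    target[action] = reward + gamma * np.max(q_values(next_state, target_weights))
    return target
</code></pre>
Training on the squared error between <code>q_values(state, weights)</code> and <code>build_target(...)</code> then only produces a gradient for the action that was actually taken, which is exactly the behaviour described above.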
|
There are a couple of ways you can define the architecture of a DQN. The most common way is to take in the state and output the value function of all possible actions - this leads to a DQN with multiple outputs. The other, less efficient, way is to take a state-action pair as input and output a single real value - this approach is typically avoided since we need to run the model multiple times to get estimates for different actions.
The replay buffer is used to store <span class="math-container">$(S,A,R,S')$</span> transitions as encountered using your <span class="math-container">$\epsilon$</span>-soft policy. We sample one of these transitions from the replay buffer and calculate an estimate of the value function for <span class="math-container">$(S,A)$</span>, i.e. <span class="math-container">$\hat Q(S,A,\theta)$</span>, and then we calculate a target as follows: <span class="math-container">$$target =R+\gamma\max\limits_{a'}\hat Q(S',a',\theta^-)$$</span>
Assuming you use the first model, you can then use a squared-error loss function, defined as follows, and update your parameters as a function of it:
<span class="math-container">$$L(\theta) = (target-\hat Q(S,A,\theta))^2$$</span>
Assuming for now that the target is fixed (I'll explain this in a minute), only <span class="math-container">$Q(S,A,\theta)$</span> is a function of <span class="math-container">$\theta$</span> in the loss function. <span class="math-container">$Q(S,A,\theta)$</span> corresponds to one output node of your DQN and therefore, as you've already highlighted, when carrying out backpropagation the parameters are updated such that the value of this one node tends towards the specified target.
This is just how Q-learning works, we use samples generated by the behaviour policy to create <span class="math-container">$L(\theta)$</span> and then tweak the parameters to minimise the cost. As we do this for more and more samples the network hopefully figures out a way that accommodates for every sample it's been trained on so far (with more emphasis on the most recent samples).
<strong>As to your issue, are you sure you're training on multiple different samples and not just a specific one? it may just be a bug you've overseen.</strong>
<hr />
<strong>Explaining <span class="math-container">$\theta^-$</span></strong>
I used a slightly different notation, <span class="math-container">$\theta^-$</span>, for the parameters used to generate the bootstrapped estimate, <span class="math-container">$\max\limits_{a'}\hat Q(S',a',\theta^-)$</span>. <span class="math-container">$\theta^-$</span> is only matched to <span class="math-container">$\theta$</span> every <span class="math-container">$n^{th}$</span> step because we want to keep the target as constant as possible. The reason is that Q-learning does not necessarily converge when using neural networks, partly due to bootstrapping, which can cause the optimisation to diverge because of generalisation across states. By using this <span class="math-container">$\theta^-$</span> we help prevent things like that from happening.
Ultimately the idea of the replay buffer and the fixed parameter for bootstrapping are to try to convert the RL problem into a supervised learning problem because we know much more about how to deal with supervised learning problems when using DNNs.
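As a rough illustration of the replay buffer and the periodically synced <span class="math-container">$\theta^-$</span> (again only a toy linear sketch with invented names, not a full DQN implementation):
<pre><code>import random
from collections import deque
import numpy as np

gamma = 0.99
sync_every = 500                  # copy theta -> theta^- every n updates
buffer = deque(maxlen=10_000)     # stores (s, a, r, s_next) transitions

def train_step(step, weights, target_weights, lr=1e-3):
    """One update on a single sampled transition, using a linear Q model."""
    s, a, r, s_next = random.choice(buffer)
    # bootstrapped target uses the frozen parameters theta^-
    target = r + gamma * np.max(s_next @ target_weights)
    error = target - (s @ weights)[a]
    # gradient step on the squared error, touching only the taken action's column
    weights[:, a] += lr * error * s
    if step % sync_every == 0:
        target_weights[:] = weights   # refresh theta^-
    return error ** 2
</code></pre>
The point of the sketch is only to show where <span class="math-container">$\theta^-$</span> enters the target and how rarely it is refreshed relative to <span class="math-container">$\theta$</span>.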
|
https://ai.stackexchange.com
|
511,960
|
[
"https://electronics.stackexchange.com/questions/511960",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/258342/"
] |
In general, if we are working on a sequential circuit, say a flip-flop (e.g. a D flip-flop),
the code we write for the always block is:
<pre><code> always @(posedge clk or posedge reset)
begin
if (reset) begin
// Asynchronous reset when reset goes high
q <= 1'b0;
end else begin
// Assign D to Q on positive clock edge
q <= d;
end
end
</code></pre>
I am confused on one point: why is the line <code>if(clk)</code> not used/written/introduced before <code>q <= d</code> in our always block?
Motivation:
Posedge transition corresponds to transition from:
<ul>
<li>0 to 1</li>
<li>x to 1</li>
<li>z to 1</li>
<li>0 to x</li>
<li>0 to z</li>
</ul>
So why, in most sequential code, don't we confirm that the positive edge of the clock has appeared after an edge transition from low to high?
I've searched the forum for this topic but can't find a specific answer on this. I am a newbie and will appreciate your guidance.
|
You have a valid point. If we were being very careful we would want to know if the <code>clock</code> or <code>reset</code> was actually in the <code>X</code> state, and we would probably set <code>Q</code> to <code>X</code> if that was the case.
So why don't we do those checks? The <code>clock</code> and <code>reset</code> are signals that we design very carefully to ensure that they are solid digital signals, with fast transitions from 0 to 1. So, it is often safe to assume that they are never <code>X</code> for a significant length of time.
If you do want to be a careful designer, it is usually better to check for unknown values of <code>clock</code> and <code>reset</code> at their point of origin rather than everywhere they are used. Adding assertions for these signals in just one part of the design allows the simulations to be much more efficient than adding complex if/then/else checks in millions of flip-flops.
|
It's implied that if the block triggered and reset is <strong>not</strong> high, then a clock rising edge must have triggered the always block (because the always block triggers either on posedge reset <strong>or</strong> posedge clk). Basically, if reset is high you want to behave like a reset no matter what in the always block; otherwise you want to behave like a flip-flop.
|
https://electronics.stackexchange.com
|
159,180
|
[
"https://dba.stackexchange.com/questions/159180",
"https://dba.stackexchange.com",
"https://dba.stackexchange.com/users/88366/"
] |
I need to find out all the users who registered for my Postgresql 9.3 backed website in the <em>24 hour window 2 days ago</em>. Currently, I'm doing that via running the following queries, and then manually subtracting the difference:
<pre><code>select count (*) from auth_user where date_joined > now() - interval'24 hours';
select count (*) from auth_user where date_joined > now() - interval'48 hours';
</code></pre>
How do I do everything in the same SQL query, including the subtraction? Thanks in advance!
<hr>
If I do <code>select count (*) from auth_user where date_joined > (now() - interval'48 hours') - (now() - interval'24 hours');</code>, I get:
<blockquote>
No operator matches the given name and argument type(s). You might
need to add explicit type casts.
</blockquote>
|
By default only root has full access to everything on the database. However, it is very easy to set it so that every user has access to the data in the database.
The following is for a new user specifically; this is usually the more accepted way of granting privileges.<br>
1. On the root account I create a new database with a table and rows
<pre><code>mysql> create database db;
Query OK, 1 row affected (0.00 sec)
mysql> use db;
Database changed
mysql> create table t1(k int);
Query OK, 0 rows affected (0.21 sec)
mysql> insert into t1 values (1),(2),(3);
Query OK, 3 rows affected (0.05 sec)
Records: 3 Duplicates: 0 Warnings: 0
mysql> select * from t1;
+------+
| k |
+------+
| 1 |
| 2 |
| 3 |
+------+
3 rows in set (0.00 sec)
</code></pre>
2. I then create another user called user1 and flush
<pre><code>mysql> CREATE USER 'user1'@'localhost' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
</code></pre>
3. I exit the root account, and access the user1 (identified by 'password')
<pre><code>user@localhost:~$ mysql -u user1 -P 2227 test -p
Enter password:
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 458
Server version: 5.5.5-10.1.19-MariaDB Source distribution
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
</code></pre>
4. When I check which databases are available, it's clear that <code>db</code> is not there.
<pre><code>mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| test |
+--------------------+
2 rows in set (0.00 sec)
</code></pre>
5. In order to have <code>db</code> there, you would need to execute the following command on the root account:
<pre><code>mysql> GRANT ALL PRIVILEGES ON * . * TO 'user1'@'localhost';
Query OK, 0 rows affected (0.00 sec)
</code></pre>
As you can see, <code>user1</code> now has access to all the data in MySQL.
<pre><code>mysql> show databases;
+--------------------+
| Database |
+--------------------+
| db |
| information_schema |
| mysql |
| performance_schema |
| test |
+--------------------+
5 rows in set (0.00 sec)
</code></pre>
If you would like to grant root-level privileges to all <strong>existing</strong> users, without iterating through the names, then execute the following (as root):
<pre><code>mysql> GRANT ALL PRIVILEGES ON * . * TO '.'@'localhost';
Query OK, 0 rows affected (0.00 sec)
</code></pre>
|
MySQL by default creates one or more <code>root</code> user accounts (this depends on the MySQL version) that are indeed superusers and have full access to all databases that you create on that MySQL server. However, these accounts are initialised as superusers and you can remove their access rights based on your requirements. Or at least you can rename the superuser accounts from <code>root</code> to something else to make it even harder to hack your MySQL server. Any superuser account with all privileges granted on <code>*.*</code> and with the grant option added will have full access to any newly created database on the MySQL server.
Technically, it is possible not to have any superuser account in a mysql server, but it does not make much sense. Such accounts should be used as a last resort only, but they can be handy in case of an emergency.
|
https://dba.stackexchange.com
|
2,164,994
|
[
"https://math.stackexchange.com/questions/2164994",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/406632/"
] |
Is the ratio test for convergence applicable to the below series:
$$\sum_{n=1}^\infty \frac{n^3+1}{\sqrt[3]{n^{10} + n}}$$
I already know that the series diverge. I want to confirm if the ratio test is applicable or not?
|
Let's compute the ratio
$${a_{n+1}\over a_n}={(n+1)^3+1\over n^3+1}\cdot {\sqrt[3]{n^{10}+n}\over \sqrt[3]{(n+1)^{10}+n+1}}\sim{n^{1\over 3}\over(n+1)^{1\over3}}\to 1$$
We cannot conclude anything from the ratio test.
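For completeness (a side note, not needed for the question as asked): the divergence itself follows from a limit comparison rather than the ratio test, since
$$a_n=\frac{n^3+1}{\sqrt[3]{n^{10}+n}}\sim \frac{n^3}{n^{10/3}}=\frac{1}{n^{1/3}},$$
and $\sum n^{-1/3}$ is a divergent $p$-series ($p=\tfrac13\le 1$).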
|
If the limit of the ratio $$\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = 1$$ Then the Ratio Test is <strong>Inconclusive</strong>. The test does not tell you anything about the series. The series may diverge or converge conditionally or absolutely.
<strong>As such, it would not be correct to say that the series <em>fails</em> the ratio test.</strong> A series only fails the ratio test (i.e. is shown by it to diverge) when the above limit is <em>strictly greater</em> than $1$.
|
https://math.stackexchange.com
|
16,681
|
[
"https://mathoverflow.net/questions/16681",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4302/"
] |
Is a 'small enough' ellipse projected onto the surface of a sphere convex? By ellipse I mean the set of points C with a constant sum |AC| + |BC|, where A and B are the two centres (the foci). By 'small enough' I mean that the radii fit within 90 degrees (I think it is not convex once you make it large enough, though the limit is probably more like 180 degrees).
It seems to me that it is indeed convex, but is there some simple proof? The mathematics I tried usually ends up as f(x) = arccos(a(x)) + arccos(b(x)), and it isn't easy to prove that the function has a reasonable shape when a(x) is decreasing and b(x) is increasing. Is there some easy proof I have overlooked?
By convex I mean that any shortest line connecting two points on the ellipse is 'inside' the ellipse (i.e. the distance |AX| + |XB| is smaller than or equal to the distance defining the ellipse for any point X of the line).
Update: I think I eventually found a solution; the triangle inequality works for these 'small enough' triangles. Geometrically the problem can be somewhat rearranged: in the end I have to prove that a triangle that is 'inside' another triangle is indeed smaller; the triangle inequality combined with the way distance is computed on a sphere will do the trick.
|
Yes, it is. After central projection onto the plane (the Klein model for the sphere) you obtain a usual ellipse.
You can also show it using the triangle inequality. All proofs from the Euclidean plane work. For example, this one:
Suppose $F_1$ and $F_2$ are the foci of the ellipse. Take any two points $A$ and $B$ inside, and reflect $F_2$ with respect to the line $AB$; denote the new point by $F_2'$.
Take any point $X$ on the segment $AB$. Suppose the ray $F_1X$ intersects the segment $F_2'A$ (the case of $F_2'B$ is the same) in the point $Y$. We have
$$F_1X+F_2X=F_1X+F_2'X< F_1X+XY+YF_2'=F_1Y+YF_2'< F_1A+AY+YF_2'=F_1A+AF_2'$$
|
Note that the cone over any convex figure (in particular, an ellipse) is convex.
And the intersection of a convex cone with a sphere centered at the vertex of the cone is spherically convex.
|
https://mathoverflow.net
|
290,164
|
[
"https://physics.stackexchange.com/questions/290164",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/131596/"
] |
So imagine a tennis ball (or other object) attached to a string and spun in perfectly horizontal circular motion at a constant velocity. The two forces acting on the ball would be the force of tension (pulling the ball towards the center aka centripetal force) and the force of gravity (pulling the ball downwards aka weight).
In order for the ball to not succumb to gravity, its centripetal force must be greater, right?
So therefore, would the centripetal force have to be equivalent (or greater) than the force of gravity on the ball? Is the weight of the ball always going to be the minimal centripetal force required to keep the ball in motion?
On one level this seems almost intuitive (you want to balance the forces) but on the other I feel like gravity shouldn't even play a part in horizontal motion.
Any help in clearing up my befuddled brain is greatly appreciated.
|
<img src="https://i.stack.imgur.com/WOnyP.jpg" alt="enter image description here">
I will not write down everything; you should be able to write the equations from the free-body diagram.
Now:
the tension in the string is T,
and the weight of the ball is balanced by T cos θ (where θ is the angle of the string from the vertical),
so
1. if we also consider the string, the motion is conical;
2. here, gravity does play a part (it determines the shape of the cone);
3. since the weight equals T cos θ, to answer your question, the tension must be greater than the weight of the ball.
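In symbols (taking $\theta$ as the angle of the string from the vertical, $m$ the ball's mass, $v$ its speed and $r$ the radius of the horizontal circle - notation introduced here for illustration, not given in the question):
$$T\cos\theta = mg, \qquad T\sin\theta = \frac{mv^2}{r},$$
so $T = mg/\cos\theta > mg$: only the horizontal component of the tension supplies the centripetal force, while the vertical component supports the weight.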
|
You have to consider the two axes separately: let's call $x$ the horizontal plane, and $y$ the up-and-down direction (the direction in which gravity acts). In the "free-body diagram" of the ball, there are also <strong>three</strong> forces to consider: 1) gravity, 2) centripetal, and <strong>3) the tension in the string</strong>. The tension in the string needs to balance the combination of gravity and centripetal acceleration.
The centripetal acceleration itself has nothing to do with gravity; it only has to do with the ball's circular motion. Instead of the ball spinning nearly horizontally, you can imagine spinning it ever so slowly---and it remaining nearly vertical. Gravity hasn't changed, only the centripetal acceleration, and thus the tension in the string---via force balance in the $x$ direction.
Because there is gravity in the $y$ direction, <strong>the string can never be fully horizontal, because some amount of the string tension must act in the $+y$ direction to counteract gravity</strong>. Thus, only the $y$ component of the string-tension is determined by gravity, and this occurs regardless of the centripetal force.
|
https://physics.stackexchange.com
|
381,343
|
[
"https://softwareengineering.stackexchange.com/questions/381343",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/105003/"
] |
<strong>Disclaimer:</strong> There are some similar questions, but I didn't find any which touch specifically the problems you face while reviewing a large pull request.
<h2>Problem</h2>
I feel my code reviews could be done in a better way. I'm particularly talking about big code reviews with many changes across 20+ files.
It's pretty simple to catch obvious local code problems. Understanding whether the code meets business criteria is a different story though.
I have trouble following the thought process of the code author. It's pretty hard when the changes are numerous and spread across multiple files. I try to focus on the groups of files related to a particular piece of the changes, then review the groups one by one. Unfortunately the tool I use (Atlassian Bitbucket) is not very helpful: whenever I visit a file, it gets marked as seen, even though it often turns out not to be related to the currently examined piece of changes. Not to mention that some files should be visited multiple times and their changes reviewed piece by piece. Also, coming back to relevant files when you have followed a bad path isn't easy.
<h2>Possible solutions, and why they don't work for me</h2>
Reviewing a pull request by commits often solves the size problems, but I don't like it since I'll frequently be looking at outdated changes.
Of course, creating smaller pull requests seems like a remedy, but it is what it is, sometimes you get a large pull request and it has to be reviewed.
You can also ignore the logical aspect of the code as a whole, but that seems pretty risky, particularly when the code comes from an inexperienced programmer.
Using a better tool could be helpful, but I didn't find one.
<h2>Questions</h2>
<ul>
<li>Do you have similar problems with your code reviews? How do you face them?</li>
<li>Maybe you have better tools?</li>
</ul>
|
We had these problems and asking the question below has been working well for us:
Does the PR do <em>one thing</em> that can be merged and can be independently tested?
We try to break PRs up by single responsibility (SR). After some initial push-back, folks were surprised to find that even a change scoped to a <em>single</em> responsibility can still be large.
The SR makes it really easy to review and also disseminates knowledge of the expected implementation.
This also allows for incremental refactors as more is added and PR turnaround time is drastically reduced!
I’d suggest breaking them up by SR if possible and see if it works for you. Takes some practice to do it that way.
|
Sometimes you can't avoid large pull requests - but you can be discerning as to who has what responsibility.
I treat pull requests as persuasive arguments. The Author is trying to convince me that the code should look and work this way.
As with any argument, it should have a single clear idea. It's either:
<ul>
<li>a refactor, </li>
<li>an optimisation, </li>
<li>or new functionality.</li>
</ul>
If they are not being clear, then there is a pretty good chance that they do not understand it themselves. Open up the dialogue and help them break their argument down into its sub-arguments. If need be, it's perfectly alright - even beneficial - for them to recreate those commits and offer more comprehensible and direct pull requests.
There will still be large pull requests, but with a clear argument it's much easier to see what does not fit.
As for tooling, it depends on your organisation and process. Bitbucket is a decent tool; whether something else is better depends on everything from your budget, hardware, and requirements, through to your pre-existing processes, business rules, and the various software adaptations you've already made to accommodate Bitbucket. I'd start by looking through the documentation to see if the behaviour can be configured, and maybe throw the question out to the plugin community (or join it by making a plugin to do that).
|
https://softwareengineering.stackexchange.com
|
314,694
|
[
"https://softwareengineering.stackexchange.com/questions/314694",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/59953/"
] |
Our agile sprint lasts three weeks, say 40 × 3 = 120 hours. Our boss requires us to log at least 8 hours every day, and we use JIRA to record time. However, my current story in the sprint has an estimated time of about 15 hours, which of course is not enough on its own, because I also have to search online, discuss with team members, watch training videos, etc. But even so, I still can't log the whole 120 hours. To be honest, I am a quick problem solver; maybe I can finish the work in 40 hours. After I finish my job, I can learn new technology related to the sprint project by myself.
The thing is, if I logged more time on the sprint work, the burn-down chart would look ugly. If I logged less time on the project, my boss would be angry too: why do you spend so much time on training rather than on direct sprint work?
The worrying thing is that I heard performance reviews would be tied to the time tracking.
So please advise me on the right approach to time tracking.
|
Your company isn't following standard agile practices.
The developers should be the one estimating, in whatever units you use (hours or Story Points or something else). If you are doing the work, you should be involved in estimating it. In fact, everyone who is required to complete the Story needs to be involved in estimating to make sure that the size is appropriate for the estimated amount of work.
As someone who worked for a contractor, I do understand the need for tracking time. Typically, a Story that is worth more Story Points will take longer to complete due to the various factors. You should look at logging time against a project or activity, not necessarily a particular Story.
To fix these problems, you should first work on getting realistic estimates in place and using those estimates, along with historical data from previous Sprints, to plan future Sprints. The next step is to look at the overall process to make sure that the Development Team is able to commit to a reasonable amount of work for a Sprint and that, if the work is completed ahead of schedule, that additional work can be brought in. Finally, your Sprint Retrospectives should be used to talk about these problems and come up with methods to fix them.
|
While I agree with Thomas Owens answer, I think this needs a more strongly worded answer. The process you describe is completely missing some of the most important parts of agile management and these are the parts that managers should care the most about. (full disclosure: I'm a manager.)
In order to improve predictions about when work will be done, estimates should be made relative to other work that the team has done in the past. Then empirical data is used to predict how long the work will take. This can be done using basic statistical methods, and as more data accumulates, the picture of work throughput (averages, variance) becomes clearer. Managers should then also be able to detect trends as to whether throughput is increasing or decreasing in a statistically significant way.
The other issue here is the use of hours. There are some big problems with using hours as a metric:
<ol>
<li>What's an hour? Is it an hour of 'flow' coding? Is an hour of debugging a head-slapping mistake an 'hour'? If you estimate 8 hours a day, when do you look at your email? What if you help a co-worker for 30 minutes - do you subtract that half-hour from your hours for the day? How do I know that one developer's idea of an hour is the same as another's?</li>
<li>Who cares how many hours something takes anyway? What's the precision of the deadline? Surely it's not down to the hour. If someone is trying to call development that close, they've already failed.</li>
<li>No developer knows how many hours (regardless of how you define an hour of development) it takes to get things done. Ever notice that when you are really crushing it, you look up and the day has passed? Human minds suck at measuring time even when they are focused on trying to do exactly that.</li>
</ol>
I argue that the smallest proper unit for predicting development is a day. Unless you are working in a cave (bunker, casino), everyone can agree on when a day has passed. There's this large bright orb that passes over our heads with amazing regularity. An argument can be made for weeks, depending on the situation, but they are less uniform (e.g. vacations and holidays).
I am not saying that estimates should be given in days (although, it's a huge improvement over hours.) I'm saying days should be the unit of the predictions made by <em>the manager</em>. And to come full circle, when the manager (or scrum master) has the developers estimate in hours, they are usually delegating <strong>their</strong> job to the developers. And if all they do with that is check whether the developers hit that estimate, these estimates are worse than useless. They are actively disrupting the development process by occupying the developers with a BS game and likely lowering productivity by increasing stress levels.
|
https://softwareengineering.stackexchange.com
|
4,788
|
[
"https://cardano.stackexchange.com/questions/4788",
"https://cardano.stackexchange.com",
"https://cardano.stackexchange.com/users/1045/"
] |
I'm missing the transaction prefix needed to get the bech32 hash of a transaction id using cardano-serialization-lib.
What prefix is used in a transaction's bech32 hash, and how can I find/compute it?
<pre><code>const decodeUtxo =
(wasm: WasmT) =>
(encodedUtxo: string): WasmNamespace.TransactionUnspentOutput =>
wasm.TransactionUnspentOutput.from_bytes(Buffer.from(encodedUtxo, "hex"));
const collateralUtxos = (await wallet.getCollateral()).map(decodeUtxo(wasm));
const prefix = '' // how can I find this?
// to_bech32 function requires prefix
const collateralTxHashes = collateralUtxos.map((utxo) => utxo.input().transaction_id().to_bech32(prefix))
</code></pre>
|
In order to get the transaction id using cardano-serialization-lib you should convert the transaction id to bytes and then convert those bytes to a hex string.
<pre><code>Buffer.from(utxo.input().transaction_id().to_bytes()).toString('hex')
</code></pre>
|
Try this one: <code>utxo.input().transaction_id()</code>
|
https://cardano.stackexchange.com
|
425,356
|
[
"https://softwareengineering.stackexchange.com/questions/425356",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/129731/"
] |
I have recently inherited a codebase which has a weird problem and I am trying to search for an extensible solution that can solve my issue.
Consider I have a model class that is used as a model to populate data in the UI which looks something like this.
<pre><code>public class Person
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Address { get; set; }
}
</code></pre>
All the information gets retrieved via web services, and the above class is used to do the required job.
But this model class serves another purpose as well: passing information which is not received as part of the JSON response. Because of this, new properties have been introduced on the model that are marked JsonIgnore. For example, on one particular screen I need to know whether the person is a new person; since this property is not received from JSON, the model object now looks something like this:
<pre><code>public class Person
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Address { get; set; }
[JsonIgnore]
public bool IsNewPerson { get; set; }
}
</code></pre>
Now, the problem is that this class is getting uglier as new properties are introduced by the requirements; these properties are not part of the JSON response and only serve the UI, hence the JsonIgnore attribute.
My question is: what's the ideal way to handle such a scenario where the model keeps changing? As per the OCP, the class shouldn't be modified every time. How do I extend this model class?
Is it good practice to create an interface for the model class? Is it good to create a base model class?
Can anyone suggest a good design for this?
|
<blockquote>
How to model classes that can be <strong>extendable</strong>?
</blockquote>
In the field of software development, extensibility specifically refers to inheritance, and inheritance is not the correct approach here.
<blockquote>
The above class object is used to do the required job. But this model class serves <strong>another purpose</strong> and that is to pass information which is not received as a part of JSON response.
</blockquote>
"Another purpose" is an SRP violation. Different purposes should be represented individually, specifically so that a change to one does not inherently cause a change in the other.
The explanation in your question is a bit ambiguous, but if I understand you correctly, your application is a man in the middle, i.e. the second <code>Person</code> class you show is the data you send back to your end user, and the first <code>Person</code> class is the data you receive from an external web service?
If that is the case, then you should <strong>always</strong> separate these two DTOs, as they represent the exchange between different sets of layers (i.e. external service => you, and you => end user).
Which means that your code should contain several bits. The DTO from the external web service:
<pre><code>public class Person
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Address { get; set; }
}
</code></pre>
Your own DTO which you return to your end user:
<pre><code>public class PersonModel
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Address { get; set; }
public bool IsNewPerson { get; set; }
}
</code></pre>
Note that I'm using a different name for clarity's sake here. You can choose your own name or even reuse the same one (if they're in different namespaces).
And then you need some mapping logic. This mapping logic is the exact spot where you can inject your additional <code>IsNewPerson</code> value. Note I'm just inventing some example logic here to showcase how you could cleanly structure this.
<pre><code>public PersonModel GetPersonById(int id)
{
var person = _externalService.GetPerson(id);
var isNewPerson = _magic8Ball.TellMeTrueOrFalse();
return MapToPersonModel(person, isNewPerson);
}
private PersonModel MapToPersonModel(Person p, bool isNewPerson)
{
return new PersonModel()
{
FirstName = p.FirstName,
LastName = p.LastName,
Address = p.Address,
IsNewPerson = isNewPerson,
};
}
</code></pre>
Obviously, replace <code>_magic8Ball.TellMeTrueOrFalse()</code> with your custom logic.
<hr />
If I misunderstood, and both purposes you talk about are both interactions between your application and the end user, there are a few more options you have.
You could still separate the classes like you did above, and then each endpoint simply uses their own class to achieve their result, without conflict.
Or you could promote reusability for the person-related field using <strong>composition</strong> (over inheritance). For example:
<pre><code>// Purpose 1
public class Person
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Address { get; set; }
}
// Purpose 2
public class CreatedPerson
{
public Person Person { get; set; }
public bool IsNewPerson { get; set; }
}
</code></pre>
Again, the <code>CreatedPerson</code> name is a generic example. A better name depends on what this "purpose 2" is, and you can choose a more apt name.
A third option is to consider that you might have this same <code>IsNewXXX</code> type of field for many different entities, not just person, so you could consider an even more reusable generic class:
<pre><code>public class Person
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Address { get; set; }
}
public class CreatedObject<T>
{
public T Value { get; set; }
public bool IsNew { get; set; }
}
</code></pre>
This <code>CreatedObject<T></code> makes it so that you don't have to create this same kind of class for every entity you expose. For example, if you expose <code>Person</code>, <code>Animal</code> and <code>Car</code>, you don't have to write custom <code>CreatedPerson</code>, <code>CreatedAnimal</code> and <code>CreatedCar</code> classes; you can simply use <code>CreatedObject<T></code> with the appropriate generic type (<code>CreatedObject<Person></code>, <code>CreatedObject<Animal></code>, <code>CreatedObject<Car></code>).
Which of these approaches is the most appropriate heavily relies on context which your question is somewhat light on.
But the main takeaway here is <strong>don't use the same class for more than one purpose</strong>.
|
One alternative is to add <em>behavior</em> to objects.
You said the object "serves another purpose" in addition to be used to communicate json messages. So add both "purposes" (i.e. behavior) explicitly into the object instead of trying to "annotate your way out" the problem.
Add <code>toApiMessage()</code> and the other "purpose" as explicit methods. Adding all the behavior into the object is the only way to make it maintainable. That is, to be able to change the "data" and the "behavior" it influences in a single place.
It is also coincidentally the object-oriented way.
|
https://softwareengineering.stackexchange.com
|
136,070
|
[
"https://softwareengineering.stackexchange.com/questions/136070",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47939/"
] |
Let's say you have some automated processes that generally go through the following states:
scheduled - initiated - validating - executing - completed
On top of that these processes can prematurely end because of an error or explicit user cancellation.
My first impulse is to simply add <em>error</em> and <em>cancelled</em> to the list of possible status values, but I was wondering about the (conceptual) advantages of separating <em>result</em> from <em>status</em> (even though it seems to me that one might argue that error and cancelled are also simply different states than the <em>completed</em> state).
|
The state you assign to your processes should reflect what your program (or the users, if you are just visualizing states) is going to do with this information. So, do you have a requirement to evaluate / show the state of your processes while they are running and showing no error? Then separate <em>result</em> from <em>status</em>. If you just need the status when a process has ended, then don't separate them.
You should not model anything just for the sake of modeling. Better check your requirements. And if you are unsure about what you might need later, choose the smallest, most simple solution for the requirements you know for sure. If you are just "guessing", in 90% of all cases you will guess wrong, so you will have to change your model later anyway.
|
<blockquote>
but I was wondering about the (conceptual) advantages of separating result from status (even though it seems to me that one might argue that error and cancelled are also simply different states than the completed state).
</blockquote>
There is a great advantage in detailing progress and identifying failure points (within reasonable limits), as in your case.
I think the confusion stems from the terms 'status' and 'state' - we must qualify those terms. For example, "Task Status" is not very precise, so we may want to use "Task Execution Status"; however, that is misleading because you already have an execute step. We may use the name "Task Processing Status", and the values "initiated - validating - executing - completed" make perfect sense. Indeed we could add 'Cancelled' to the list. However, 'Error' does not answer a question like "What is the <em>Task Processing Status</em>?" very well. It looks like 'Error' is a sub-status of <em>Completed</em>. So what do we do? We could rename <em>Completed</em> to <em>Completed OK</em> and then add <em>Completed With Error</em> to the list. So the final list of <em>Task Processing Status</em> values is:
<ul>
<li>Initiated, </li>
<li>Validated, </li>
<li>Executing, </li>
<li>Cancelled, </li>
<li>Completed OK, </li>
<li>Completed With Error</li>
</ul>
Edit: the above list still requires some work. The first four items don't have the word 'OK' in them, so perhaps it should be there to match the "Completed OK" state. The other thing is that the first four items don't have a "With Error" variant - what does that mean? What happens when "Executing" ends abnormally - does that call for a new state of "Executed With Error"? At this point, more input and analysis may be required.
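If it helps to see the idea in code, here is a minimal sketch of such a single status enumeration (the language and the exact names are only illustrative, not prescriptive):
<pre><code>from enum import Enum

class TaskProcessingStatus(Enum):
    INITIATED = "initiated"
    VALIDATED = "validated"
    EXECUTING = "executing"
    CANCELLED = "cancelled"
    COMPLETED_OK = "completed ok"
    COMPLETED_WITH_ERROR = "completed with error"

# terminal states: the process will not change status again
TERMINAL_STATES = {
    TaskProcessingStatus.CANCELLED,
    TaskProcessingStatus.COMPLETED_OK,
    TaskProcessingStatus.COMPLETED_WITH_ERROR,
}
</code></pre>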
|
https://softwareengineering.stackexchange.com
|
8,957
|
[
"https://mathoverflow.net/questions/8957",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2510/"
] |
Several times I've heard the claim that any Lie group $G$ has trivial second homotopy group $\pi_2(G)$, but I have never actually come across a proof of this fact. Is there a nice argument, perhaps like a more clever version of the proof that $\pi_1(G)$ must be abelian?
|
I don't know of anything as bare hands as the proof that <span class="math-container">$\pi_1(G)$</span> must be abelian, but here's a sketch proof I know (which can be found in Milnor's Morse Theory book. Plus, as an added bonus, one learns that <span class="math-container">$\pi_3(G)$</span> has no torsion!):
First, (big theorem): Every (connected) Lie group deformation retracts onto its maximal compact subgroup (which is, I believe, unique up to conjugacy). Hence, we may as well focus on compact Lie groups.
Let <span class="math-container">$PG = \{ f:[0,1]\rightarrow G | f(0) = e\}$</span> (I'm assuming everything is continuous). Note that <span class="math-container">$PG$</span> is contractible (the picture is that of sucking spaghetti into one's mouth). The projection map <span class="math-container">$\pi:PG\rightarrow G$</span> given by <span class="math-container">$\pi(f) = f(1)$</span> has fibre <span class="math-container">$\Omega G = $</span> the loop space of <span class="math-container">$G$</span> <span class="math-container">$= \{f\in PG | f(1) = e \}$</span>.
Thus, one gets a fibration <span class="math-container">$\Omega G\rightarrow PG\rightarrow G$</span> with <span class="math-container">$PG$</span> contractible. From the long exact sequence of homotopy groups associated to a fibration, it follows that <span class="math-container">$\pi_k(G) = \pi_{k-1}(\Omega G)$</span>.
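Spelling that step out: the relevant segment of the long exact sequence is
<span class="math-container">$$\pi_k(PG)\rightarrow \pi_k(G)\rightarrow \pi_{k-1}(\Omega G)\rightarrow \pi_{k-1}(PG),$$</span>
and since <span class="math-container">$PG$</span> is contractible the outer groups vanish, so the connecting map <span class="math-container">$\pi_k(G)\rightarrow\pi_{k-1}(\Omega G)$</span> is an isomorphism.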
Hence, we need only show that <span class="math-container">$\pi_{1}(\Omega G)$</span> is trivial. This is where the Morse theory comes in. Equip <span class="math-container">$G$</span> with a biinvariant metric (which exists since <span class="math-container">$G$</span> is compact). Then, following Milnor, we can approximate the space <span class="math-container">$\Omega G$</span> by a nice (open) subset <span class="math-container">$S$</span> of <span class="math-container">$G\times\cdots \times G$</span> by approximating paths by broken geodesics. Short enough geodesics are uniquely defined by their end points, so the end points of the broken geodesics correspond to the points in <span class="math-container">$S$</span>. It is a fact that computing low (all?...I forget)* <span class="math-container">$\pi_k(\Omega G)$</span> is the same as computing those of <span class="math-container">$S$</span>.
Now, consider the energy functional <span class="math-container">$E$</span> on <span class="math-container">$S$</span> defined by integrating <span class="math-container">$|\gamma'|^2$</span> along the entire curve <span class="math-container">$\gamma$</span>. This is a Morse function and the critical points are precisely the geodesics**. The index of <span class="math-container">$E$</span> at a geodesic <span class="math-container">$\gamma$</span> is, by the Morse Index Lemma, the same as the index of <span class="math-container">$\gamma$</span> as a geodesic in <span class="math-container">$G$</span>. Now, the kicker is that geodesics on a Lie group are very easy to work with - it's pretty straightforward to show that the conjugate points of any geodesic have even index.
But this implies that the index at all critical points is even. And now THIS implies that <span class="math-container">$S$</span> has the homotopy type of a CW complex with only even cells involved. It follows immediately that <span class="math-container">$\pi_1(S) = 0$</span> and that <span class="math-container">$H_2(S)$</span> is free (<span class="math-container">$H_2(S) = \mathbb{Z}^t$</span> for some <span class="math-container">$t$</span>).
Quoting the Hurewicz theorem, this implies <span class="math-container">$\pi_2(S)$</span> is <span class="math-container">$\mathbb{Z}^t$</span>.
By the above comments, this gives us both <span class="math-container">$\pi_1(\Omega G) = 0$</span> and <span class="math-container">$\pi_2(\Omega G) = \mathbb{Z}^t$</span>, from which it follows that <span class="math-container">$\pi_2(G) = 0$</span> and <span class="math-container">$\pi_3(G) = \mathbb{Z}^t$</span>.
Incidentally, the number <span class="math-container">$t$</span> can be computed as follows. The universal cover <span class="math-container">$\tilde{G}$</span> of <span class="math-container">$G$</span> is a Lie group in a natural way. It is isomorphic (as a manifold) to a product <span class="math-container">$H\times \mathbb{R}^n$</span> where <span class="math-container">$H$</span> is a compact simply connected group.
H splits isomorphically as a product into pieces (all of which have been classified). The number of such pieces is <span class="math-container">$t$</span>.
(edits)
*- it's only the low ones, not "all", but one can take better and better approximations to get as many "low" k as one wishes.
**- I mean CLOSED geodesics here
|
The elementary proof that $\pi_1$ is abelian applies more generally to H-spaces (spaces $X$ with a continuous multiplication map $X \times X \to X$ having a 2-sided identity element) without any assumption of finite dimensionality, but infinite-dimensional H-spaces can have nontrivial $\pi_2$, for example $CP^\infty$ (which can be replaced by a homotopy equivalent topological group if one wants, as Milnor showed). Thus finite-dimensionality is essential, so any proof would have to be significantly less elementary than for the $\pi_1$ statement. It is a rather deep theorem of W.Browder (in the 1961 Annals) that $\pi_2$ of a finite-dimensional H-space is trivial.
Hopf's theorem that a finite-dimensional H-space (with finitely-generated homology groups) has the rational homology of a product of odd-dimensional spheres implies that $\pi_2$ is finite, but the argument doesn't work for mod p homology so one can't rule out torsion in $\pi_2$ so easily. It's not true that a simply-connected Lie group is homotopy equivalent to a product of odd-dimensional spheres. For example the mod 2 cohomology ring of Spin(n) is not an exterior algebra when n is sufficiently large. For SU(n) the cohomology ring isn't enough to distinguish it from a product of spheres, but if SU(n) were homotopy equivalent to a product of odd-dimensional spheres this would imply that all odd-dimensional spheres were H-spaces (since a retract of an H-space is an H-space) but this is not true by the Hopf invariant one theorem. There are probably more elementary arguments for this.
|
https://mathoverflow.net
|
25,181
|
[
"https://electronics.stackexchange.com/questions/25181",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/7575/"
] |
I am curious whether there are practical differences between a DC power supply based on a half-wave rectifier and one based on a full-wave rectifier.
I mean I have a few small DC power supply units which should give 12V 0.1A each. They all have a transformer 240V->18V, then 1 diode or 4 diodes, then 78L12 (0.1A regulator) and one or two capacitors (typically 220uF or 470uF).
My question is if the power supply can give a good quality DC voltage with just a half wave rectifier (a single diode) when a 470uF capacitor and 78L12 is added, or if bridge rectifier (4 diodes) is better.
I also have one old 12V 0.2A power supply based on a Zener diode instead of 7812 regulator. It also has 18V going to just a single diode, then 33R resistor which limits current to 0.2 Amp, then Zener diode parallel with a 1000uF capacitor. Again: Would it be better to have 4 diodes there, or is the half wave rectification good enough here thanks to the 1000uF capacitor?
(All my power supplies work well, I am just curious "why" and "how" these things work.)
<strong>Update:</strong>
I found two more interesting pieces of information:
<ol>
<li>The capacitor should be approximately 500 µF (or more) for each 0.1 ampere of output. This applies to a full-wave rectifier. Since I saw the same values used in half-wave rectifiers, the capacitance there isn't enough and those are poor designs.</li>
<li>4-diode rectification cannot be used when we want to have a combined 5 V/12 V output (or any other two voltages) from a simple transformer, because it can't provide a common ground for the two different circuits. (A more complicated real example: I have a power supply with four output wires from the transformer: -7/0/+7/+18 V. It uses 2-diode rectification to get a full-wave 7 V output, and 1-diode rectification to get a half-wave 18 V output. The 18 V line can't be "upgraded" to 4-diode rectification here.)</li>
</ol>
|
Either can work correctly if designed properly. If you have a dumb rectifier supply feeding a 7805, then all the rectifier part needs to do is guarantee the minimum input voltage to the 7805 is met.
The problem is that such a power supply only charges up the input cap at the line cycle peaks, then the 7805 will drain it between the peaks. This means the cap needs to be big enough to still supply the minimum 7805 input voltage at the worst case current drain for the maximum time between the peaks.
The advantage of a full-wave rectifier is that both the positive and negative peaks are used. This means the cap is charged up twice as often. Since the maximum time since the last peak is shorter, the cap can be smaller to support the same maximum current draw. The downside of a full-wave rectifier is that it takes 4 diodes instead of 1, and one more diode drop of voltage is lost. Diodes are cheap and small, so most of the time a full-wave rectifier makes more sense. Another way to make a full-wave rectifier is with a center-tapped transformer secondary. The center is connected to ground and there is one diode from each end to the raw positive supply. This full-wave rectifies with only one diode drop in the path, but requires a heavier and more expensive transformer.
An advantage of a half-wave rectifier is that one side of the AC input can be directly connected to the same ground as the DC output. That doesn't matter when the AC input is a transformer secondary, but it can be an issue if the AC is already ground-referenced.
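To put rough numbers on the capacitor-sizing argument above, here is a back-of-the-envelope sketch (assuming 50 Hz mains and roughly 3 V of allowable ripple before the regulator drops out; both figures are assumptions for illustration, not values taken from the question):
<pre><code># C ~= I * dt / dV: the cap must supply the load current I for the time dt
# between charging peaks while sagging no more than dV.
i_load = 0.1          # A, regulator output current
ripple = 3.0          # V, allowed sag above the regulator's dropout point
f_mains = 50          # Hz

dt_half = 1 / f_mains          # half-wave: one charging peak per cycle -> 20 ms
dt_full = 1 / (2 * f_mains)    # full-wave: two charging peaks per cycle -> 10 ms

c_half = i_load * dt_half / ripple    # ~667 uF
c_full = i_load * dt_full / ripple    # ~333 uF
print(f"half-wave: {c_half * 1e6:.0f} uF, full-wave: {c_full * 1e6:.0f} uF")
</code></pre>
That lands close to the "500 µF per 0.1 A" rule of thumb mentioned in the question, and shows why a full-wave design can get away with roughly half the capacitance.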
|
Simplified explanation:
An ideal half-wave rectifier only "uses" half of the AC waveform (hence the name half-wave).
<img src="https://i.stack.imgur.com/3Vz4F.png" alt="half-wave rectifier from Wikipedia">
An ideal full-wave bridge rectifier will use the entire AC waveform.
<img src="https://i.stack.imgur.com/pfUM2.png" alt="full-wave rectifier from Wikipedia">
An ideal full-wave rectifier (with a center-tapped transformer) will also use the entire AC waveform.
<img src="https://i.stack.imgur.com/POjc3.png" alt="another full-wave rectifier from Wikipedia">
You can see that for the half-wave rectifier, every second AC cycle is skipped, leaving a gap in the output waveform. For the full-wave rectifier, since the whole waveform is used, the gap is gone (the effective output frequency is doubled).
If these waveforms are applied to a capacitor, you can see pretty clearly that for the half-wave rectifier, in order to maintain clean DC, the capacitor would need to be large enough to hold up the voltage during that big gap. For the full-wave rectifier, since there are more 'peaks', the capacitor can be smaller than for a half-wave rectifier at the same power level.
To your question, a properly-designed half-wave rectifier should have a sufficiently-large capacitor to maintain regulation despite only using half of the AC waveform, so the regulation should be just fine. There's no need to 'upgrade' the circuit with a bridge.
|
https://electronics.stackexchange.com
|
29,258
|
[
"https://mechanics.stackexchange.com/questions/29258",
"https://mechanics.stackexchange.com",
"https://mechanics.stackexchange.com/users/16286/"
] |
I know this depends on a lot of factors, but I heard a revolution can happen in
0.02 seconds. Is this the case? I can't even stop my stopwatch that fast.
|
This is a math-only question, and no one has explained the formulas. Also I like the idea of this simple question having multiple answers so...
RPM is Revolutions Per Minute, but we want a time in seconds. When you hear that word "per" it means division. So, what we have is <em>4000 revolutions / 1 minute</em> (where / is the division symbol). This easily converts to <em>4000 revolutions / 60 seconds</em>. The result of that (4000/60=) is <em>66.66 revolutions / 1 second</em>.
Now we have revolutions per second, but you want seconds per revolution (what we have, flipped upside down). So, simply enough, we flip the whole thing upside down to get <em>1 second / 66.66 revolutions</em>, and the result is (1/66.66=) <em>0.015 seconds / 1 revolution</em>.
Finally, flip that around grammatically and we get "1 revolution takes 0.015 seconds". Which is 15 milliseconds, but I'll leave that conversion as an exercise for the reader.
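If you'd rather let a computer do that arithmetic, here is a quick sketch using the same 4000 rpm figure:
<pre><code>rpm = 4000
rev_per_second = rpm / 60             # about 66.7 revolutions per second
seconds_per_rev = 1 / rev_per_second  # about 0.015 s per revolution
print(f"{seconds_per_rev * 1000:.1f} ms per revolution")  # ~15.0 ms
</code></pre>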
|
In a car, the rpm is very often shown. From experience you might know that for a normal car it varies between 1000 and 3000 rpm in normal use.
3000 rpm = 50 rotations per second = 1 rotation per 0.02 seconds.
|
https://mechanics.stackexchange.com
|
525,432
|
[
"https://electronics.stackexchange.com/questions/525432",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/264774/"
] |
I'm planning on doing some modifications to my Dunlop GCB95 Cry Baby wah pedal. As far as I can tell, the hookup wire in the pedal is 22AWG (it's difficult to read what's printed on the side). The smallest gauge of hookup wire my local hardware store had was 18AWG, and I decided to buy lengths of 3ft. in a few different colors. My thought process was that I should be able to use any hookup wire that is 22AWG or bigger. However, before I do anything to the pedal, I want to be sure that I won't run into any issues. Would it be okay to use the 18AWG wire, or do I need to get 22AWG wire?
|
Very few circuits are designed in such a way as to require a <em>minimum</em> resistance or inductance of the interconnecting wire. Audio folks sometimes do some odd things, but probably not that. So while we can't technically know without examining the specific usage, electrically this will probably be fine.
The more practical issues you might run into would be related to the larger size and stiffness in getting things in place, and the greater heat perhaps needed when soldering (which might flow to connected things causing collateral damage to insulation, plastics, etc). In some cases a larger wire might create a mechanical strain leading to future breakage of a solder joint or component, but that can be as much about the details of installation as the actual wire size.
Also consider the relative merits of solid and stranded wire - the latter often being preferred.
Ultimately the actual greatest challenges would probably be related to any inexperience in doing such work. If this audio gear is valuable, exercise extreme care, as it's quite easy to accidentally damage adjacent areas. Doing one's own work is of course a noble path, but sometimes it makes sense to get some initial practice on more "throwaway" items.
Sometimes you can scrounge suitable project wire out of things being disposed, too. Though beware that some cables use specialized wire types which aren't useful for general hookup, for example they may have strands twisted with a string, or be thin ribbons wound around one. But if it has the <em>hand</em> of a nice interconnect wire, and on stripping appears to be just copper conductor, and takes solder without the insulation making too much of an acrid mess, it can be great.
|
Thicker wire - higher ampacity rating and lower resistance. Electrically it is ok to use 18AWG instead of 22AWG.
|
https://electronics.stackexchange.com
|
225,945
|
[
"https://electronics.stackexchange.com/questions/225945",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/105487/"
] |
I live in an apartment and do not have access to the circuit breaker panel. In the summer I run two AC units, and have inadvertently tripped the breaker before while running my vacuum cleaner. This caused a big hassle, as I had to wait for the super to get home so he could go down and flip the breaker back. I'd like to avoid this ever happening again, as 1/3 of my apartment was without power for about 6 hours that day.
When it happened, I made note of which outlets went out, so I now know all the outlets on that circuit. I'd like to be able to differentiate between the outlets on the remaining two circuits, so that I can space my high-draw appliances out evenly and avoid future issues.
What I'm looking for is a device (or pair of devices) I can purchase that will allow me to check two outlets and see if they are on the same circuit. I've seen circuit breaker testers that come with one device which you plug into an outlet, but the other end requires that you have breaker access and wand the device over the breaker or something similar to check. As I mentioned, I do not have access to the breaker, so this is not an option. Ideally there would be a pair that I can plug into two separate outlets, and have one of them tell me whether they are connected or not. My searches online haven't yielded any results on a device like this, so I'm curious if there's any way to do it from within the apartment, without breaker access and definitely without tripping any breakers.
Not accepting answers that say "turn off your AC before running the vacuum" or any such thing. Obviously in the future I will make sure that I turn them off, but this is more a question of load-balancing my devices which do have to run simultaneously.
|
There are a few ways. Let's assume you've already mapped circuit 1 and exclude that from these tests.
<hr>
This method will work if there's a long wiring run back to the service panel (breaker box). Remove every big load from every circuit in the house. Nothing in the house but wall-warts, vampire loads, and clocks.
Now pick any receptacle, designate this circuit 2. Plug a power strip into one outlet, and your biggest load in the other. Put even more stuff on the power strip until you've loaded that circuit as much as you dare. Measure the voltage between <code>hot</code> and <code>neutral</code> as you add loads; ideally it is starting to sag a volt or two.
Now measure the voltage between <code>neutral</code> and <code>ground</code> on every outlet. We're looking for a small difference of less than 1 volt. Those are all on the same circuit - the one you loaded to max. You will notice that some outlets have slightly higher voltage than others because they are at different places along the cable. What they will have in common is their voltage will restore to normal when the circuit is unloaded.
Now turn your loads off ASAP. Breakers can take up to 20 minutes to trip if they are only slightly overloaded.
Why does this work? If you load a circuit heavily, its voltage will sag very slightly, due to our dear friend Mr. Ohm. But <code>hot</code> and <code>neutral</code> wires are the same size. That means <em>both of them will sag about equally</em> - <code>hot</code> will <em>drop</em> in voltage and <code>neutral</code> will <em>rise</em> in voltage. <code>Ground</code> isn't sagging: like the unloaded indicator beam on a torque wrench, it's a reference point. If ground tracks precisely with neutral, you may have a bootlegged ground.
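To put rough numbers on that sag, here is a small sketch (the wire resistance, run length and load current below are assumptions for illustration only, not measurements of any particular circuit):
<pre><code># One conductor's voltage drop is V = I * R; hot drops by this amount and
# neutral rises by roughly the same amount relative to ground.
ohms_per_ft = 2.5 / 1000   # roughly 14 AWG copper (~2.5 ohms per 1000 ft), assumed
run_ft = 40                # assumed one-way distance from panel to the loaded receptacle
amps = 10                  # assumed test load

drop = amps * ohms_per_ft * run_ft
print(f"hot drops ~{drop:.1f} V, neutral rises ~{drop:.1f} V above ground")
</code></pre>
That volt or so of neutral-to-ground offset is the kind of signature the method above looks for on outlets sharing the loaded circuit.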
<hr>
This method will work only if the two remaining circuits are on <em>opposite</em> poles in the service panel (assuming the usual 120/240 split phase service common to North America). All things being equal there's a 50/50 chance this is true.
Pick any outlet, designate this circuit 2. Plug a long extension cord into it. Bring the extension cord near every other outlet, and measure the voltage between <code>hot</code> (on the extension cord) and <code>hot</code> (on the outlet). Some will be 0 volts. Others MAY be 240 volts. If they are, you have identified your third circuit. If you get 120 volts, something is wrong - e.g. <code>hot</code> and <code>neutral</code> reversed. If you get 0 volts the two outlets are on the same circuit.
|
You can use a sensitive magnetic field sensor to pick up the field around the heavily loaded wires to accomplish what you intended to do.
As is suggested in the other answer, unload all loads from all outlets. Pick one of the outlets and load it as heavily as possible with the heaviest load you have, perhaps a 2kW electric kettle.
Now use the field tracer to track the wires and follow it to all the other outlets it is connected to.
If you don't have a field tracer, you actually can make one cheaply. I made mine by modifying an old, junked portable audio cassette player, classically known as the Walkman. Rip off its playback head and replace it with the motor solenoid of a wall clock. Stick the solenoid on the Walkman and solder its leads to the lead wires which originally connected to the playback head.
After plugging two AA batteries and 3.5 mm stereo headphones into the Walkman, you will have a perfect portable field sensor. After a few experiments with solenoid placement and orientation over the current-carrying wires, the field tracer's sensitivity can be well understood and used in a very predictable manner. Just listen for the loudest mains-frequency noise, 50 or 60 Hertz depending on where you live.
<strong>edit</strong>
The Walkman starts to act as the AC magnetic field tracer when its PLAY button is depressed, but during this time its internal motor mechanism starts to run too. The motor leads must be cut in order to avoid this motor noise being picked up by the sensor solenoid.
|
https://electronics.stackexchange.com
|
263,416
|
[
"https://security.stackexchange.com/questions/263416",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/276794/"
] |
When I use only a passphrase in LUKS for my whole system partition encryption,
so that I need to enter a password to decrypt my system partition to boot up my OS,
is this insecure, and can it be cracked by brute force if someone steals the laptop?
Is the best way to do this with a keyfile, storing the keyfile on a USB stick,
and do I have to consider anything about its length?
Can I load this keyfile with GRUB from my USB stick, and how?
<strong>UPDATE | If there are too many questions, I can split them!</strong>
I have now read that I need to store a header file too.
So I need to store 2 files now?
Can I recover a password from this header file, or decrypt the partition with only this header file?
|
<blockquote>
running <code>npm i ...</code> not long after <code>pass my-password</code> allows a malicious package to steal my entire password store
</blockquote>
Yes, but not just that. Running <code>npm i ...</code> <strong>at any time before</strong> <code>pass my-password</code> allows a malicious package to steal your entire password store. A malicious package can inject code somewhere (for example the <code>pass</code> executable or a library that it uses) so that whenever sensitive data becomes accessible, the malicious entity will have access to it as well.
As soon as an environment is compromised, it's game over.
The only solution is to run untrusted code in an isolated environment.
(Mind you, why are you installing development packages you don't trust? Are they somehow secure enough for the users of the product you're developing, but not for yourself?)
|
That’s only a partial solution to your problem, but I do use a hardware token for my GPG key. Whenever the hardware token is unplugged, no malicious code can use my GPG key.
Moreover, the hardware token I am using is a Yubikey that can be configured so that it requires a physical touch before any decryption/signature/authentication operation is performed. When configured that way, if any malicious code tries to use your key, you may notice your hardware token blinking, and it won't perform the operation unless you touch it.
Anyway, I consider this only defence in depth. I fully agree with Gilles' answer that you had better not run untrusted code in a non-isolated environment.
|
https://security.stackexchange.com
|
123,139
|
[
"https://electronics.stackexchange.com/questions/123139",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/50268/"
] |
Why are there only 3 companies allowed to make processors ("Intel - AMD - Apple")? Why can't others make one?
I know that making a processor is difficult and needs huge technology, but what about Google or Samsung? Don't they have the qualified technology? So why do they use others' processors?
|
Firstly, there are plenty of other processor manufacturers. ARM, noted above, is probably bigger than the ones you named by processors shipped, and there are others, such as VIA, who also make processors for the same x86 and x86-64 instruction sets used in Intel and AMD 32- and 64-bit processors.
Processor companies require architecture volume to persuade third parties to support a platform.
Imagine if a new chip manufacturer with their own architecture and instruction set came to market: first, they'd need to make their own system-on-chip or motherboard; second, they'd need to write all of the basic software and peripheral support, and in the first instance either write or port an operating system to their brand-new hardware to convince others of its potential.
In short, unless someone could demonstrate exceptional power usage, or alternatively a significant increase in the performance/cost ratio, they would struggle to convince partners in hardware, software and peripherals to support their new platform.
|
Any company that feels that they can be competitive in the market (or even those that feel the need to lose money by the bushel) is free to make a processor.
|
https://electronics.stackexchange.com
|
559,232
|
[
"https://physics.stackexchange.com/questions/559232",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
] |
What is the difference between <span class="math-container">$T^{\mu}{}_{\nu}$</span> and <span class="math-container">$T_{\nu}{}^{\mu}$</span> where <span class="math-container">$T$</span> is a tensor?
|
In some sense, it is basically a transpose. To be more specific, to get from the first to the second, you can raise the <span class="math-container">$\nu$</span> index using the inverse metric tensor
<span class="math-container">$$T^{\nu\mu}=g^{\nu\alpha}T^{\ \ \mu}_{\alpha}$$</span>
Then transpose <span class="math-container">$T$</span>
<span class="math-container">$$T^{\mu\nu}=(T^{\nu\mu})^T$$</span>
and then lower the <span class="math-container">$\nu$</span> index with the metric tensor
<span class="math-container">$$T^{\mu}_{\ \ \ \nu}=g_{\nu\alpha}T^{\mu\alpha}$$</span>
So you see that if <span class="math-container">$T$</span> was a symmetric tensor (<span class="math-container">$T^{\mu\nu}=T^{\nu\mu}$</span>) then they are the same thing, but in general, the transposition step is what makes them differ.
|
If <span class="math-container">$T$</span> is defined as a <span class="math-container">$(1,1)$</span> tensor, the order of the indices is unimportant, as they "live" in different spaces: one transforms as vectors do and the other as covectors do.
However, sometimes <span class="math-container">$T$</span> could originally be a <span class="math-container">$(2,0)$</span> tensor (or <span class="math-container">$(0,2)$</span>; let's consider the former for concreteness). Then the tensor with one index up and one down is defined as
<span class="math-container">$$
T^{\mu}_{\phantom{\mu\,}\nu} = \eta^{\mu\rho}\, T_{\rho\nu}\,,\qquad
T_{\mu}^{\phantom{\mu}\nu} = \eta^{\rho\nu}\, T_{\mu\rho}
\,.
$$</span>
So, if <span class="math-container">$T$</span> is not symmetric, the order matters. Keep in mind though that it is not meaningful to symmetrize or antisymmetrize indices when they are not at the same height (i.e. both down or both up).
|
https://physics.stackexchange.com
|
743,154
|
[
"https://math.stackexchange.com/questions/743154",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/141068/"
] |
Suppose you have 2 fair decks, let's call them deck A and deck B. Now take two cards from deck A and add them to the deck B, and shuffle; thus you now have a deck with 54 cards. Now draw 2 cards from deck B, what's the probability that you draw an ace?
EDIT: Let's say you draw exactly 1 ace.
This is kind of an arbitrary question I made up, I don't care so much about the specific answer but about the process. I'm really trying to understand conditional probability and I feel like if I can answer a question like this I'll be on my way.
My guess is that first we calculate the odds that one or both of the cards taken from deck A were aces. Then you would weight the possibilities of two aces having been transferred, one ace having been transferred, or no aces having been transferred when drawing from deck B?
Can someone point me in the right direction?
Also, as a complete aside, how do you calculate the odds of drawing exactly one ace when drawing two cards from a deck of 52? I've answered the question using the complementary probability of drawing 2 aces and of drawing no aces and then subtracting that from 1, but is there a "direct" way of doing it? I feel that I understand better when it's not through complements.
|
So suppose I have three disjoint sets of vertices: $\{v_{1}\} \cup \{v_{2}\} \cup V(C_{3})$. Here, $\{v_{1} \} \cup \{v_{2}\}$ is a forest which does not span, while $\{v_{1}\} \cup \{v_{2}\} \cup (C_{3} - e)$ is a spanning forest, for $e \in E(C_{3})$.
|
A forest is a subgraph of an undirected graph and is a collection of trees across its connected components.
A spanning forest is a subgraph of an undirected graph and is a collection of spanning trees across its connected components.
To clarify, let's use a simple example. Say we have an undirected graph A that has two acyclic components (spanning tree A1 and spanning tree A2) and one cyclic component A3. In this case the collection of A1 and A2 will comprise a spanning forest.
If we make a modification to component A3 and make it acyclic (i.e. it will become a spanning tree), then we can have a spanning forest comprising the collection of A1, A2 and A3.
|
https://math.stackexchange.com
|
29,149
|
[
"https://mathoverflow.net/questions/29149",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3969/"
] |
Given a regular tessellation, i.e. either a platonic solid (a tessellation of the sphere), the tessellation of the euclidean plane by squares or by regular hexagons, or a regular tessellation of the hyperbolic plane.
One can consider its isometry group $G$. It acts on the set of all faces $F$. I want to define a symmetric coloring of the tessellation as a surjective map $c:F\rightarrow C$ to a finite set of colors $C$, such that for each group element $g\in G$ there is a permutation $p_g$ of the colors with $c(gx)=p_g(c(x))$. ($p:G\rightarrow $Sym$(C)$ is a group homomorphism.)
Examples for such colorings are the trivial coloring $c:F\rightarrow \{1\}$ or the coloring of the plane as an infinite chessboard.
The only nontrivial symmetric coloring of the tetrahedron is the one that assigns a different color to each face. For the other platonic solids there are also the colorings that assign the same color only to opposite faces.
So my question is: Does every regular tessellation of the hyperbolic plane admit a nontrivial symmetric coloring?
I wanted to write a computer program that visualizes those tessellations, but I didn't find a good strategy which colors should be used. So I came up with this question.
|
The answer is yes. Moreover, for every two different faces $A$ and $B$ there is a symmetric coloring assigning different colors to $A$ and $B$.
The isometry group $G$ is residually finite, hence there is a normal finite index subgroup $H$ of $G$ that contains no elements (except the identity) sending $A$ to itself or to $B$. Assign a unique color to each orbit of $H$.
The coloring symmetry condition is essentially the following: if $f\in G$ and faces $X$ and $Y$ are of the same color, then so are $f(X)$ and $f(Y)$. Since $X$ and $Y$ are of the same color, there exists $h\in H$ such that $h(X)=Y$. Since $H$ is normal, $h_1:=fhf^{-1}\in H$. But $h_1(f(X))=f(Y)$, hence $f(X)$ and $f(Y)$ are of the same color.
|
As the tesselation is regular, its symmetry group G acts transitively; let H be a subgroup strictly containing the stabilizer of a face. Then the orbit of the stabilized face under H, and its translates by other elements of the group, form a symmetric coloring---if H is of finite index and does not act transitively, it gives a coloring of the form you want. In the case of a tetrahedron, any group strictly containing the stabilizer of a face acts transitively on the faces.
More conceptually, in any case but a tetrahedron, quotienting by the stabilizer gives a group that acts simply transitively on the faces, and so is isomorphic to the set of faces---the colored sets are just the cosets.
So the question becomes---does such a subgroup exist in the triangle groups (the group of symmetries of a regular tesselation of the hyperbolic plane)?
|
https://mathoverflow.net
|
266,114
|
[
"https://dba.stackexchange.com/questions/266114",
"https://dba.stackexchange.com",
"https://dba.stackexchange.com/users/207841/"
] |
I have a bunch of ids <code>someIds</code> (20 - 100 thousand) and a table with more than 12 million rows like this:
<pre><code>spaceShips(
id BIGINT,
shipType TEXT,
shipName TEXT,
hasArtificialIntelligence boolean
)
</code></pre>
And for all rows where <code>shipType='warship'</code> (about 2 million), I need to set the field <code>hasArtificialIntelligence</code> to true if <code>spaceShips.id</code> is in <code>someIds</code>, and otherwise set it to false. Is there a better way than two update queries?
|
Just an FYI, in your sample data you don't have any KUL destination airports.
Here is the query that will give you the results you are looking for. Please substitute the WHERE predicate clause with the origin and destination airport of your choice:
(I have created a dataset named flowlogistic and a table named flights)
<pre><code>select origin,
destination,
airline,
avg(minutes) as average_trip_time
from `flowlogistic.flights`
where origin = 'FRA' and destination = 'CDG'
group by
origin,
destination,
airline
order by
average_trip_time;
</code></pre>
|
<pre><code>SELECT airline , avg(minutes) as avg_time
FROM 'sample_data.flights'
WHERE origin = 'FRA' and destination = 'KUL'
GROUP BY airline
</code></pre>
|
https://dba.stackexchange.com
|
4,575
|
[
"https://mechanics.stackexchange.com/questions/4575",
"https://mechanics.stackexchange.com",
"https://mechanics.stackexchange.com/users/2343/"
] |
Recently, I was informed that a car I share ('02 Nissan Sentra) was close to overheating. I was told it gets close to the upper limit very quickly upon running the car. When I inspected the vehicle, I realized that the coolant overflow was empty and the radiator's coolant level was rather low (not sure exactly how low but it seems at most 75% remains).
The car must be taken to a mechanic. However, the mechanic I trust and usually use is about 30 miles away. I assume there is a leak in a hose, connector, or perhaps the water pump. The radiator is new so I do not suspect it is the source of this issue.
Because I am assuming there will be a leak, I think that when I drive it, it will most likely lose coolant. Also because of this assumption, I am wary of putting coolant in if it is just going to leak out.
I am assuming I will have to make a few stops to fill the overflow (not touching the radiator). Can I put pure water in the radiator and overflow tank and limp the car 30 miles (mostly on the freeway - yikes)?
<hr>
<strong>Edit</strong>
Great answers here, I just wanted to post what I actually ended up doing in the hopes of it helping someone at some point, or maybe to satisfy curiosity.
I decided not to go with pure water because I have read that a 50/50 ratio of coolant to water is desirable and figured even if it were leaking, should it be slow, I should follow best practice.
I was unsure what type of coolant my car used, so I just went to the dealer and got a container of what was standard.
The reservoir was completely empty, and the radiator had a low level of water. I added some water (maybe 1.5L) to the radiator to top it off and I filled the reservoir to the "MAX" line.
I read from a post that a simple test you can do is to start the car, run it a little, turn it off, and then listen for a leak. I tried that, but perhaps the car did not get hot enough, or there wasn't a leak evident to that method.
I then looked on the ground for any evidence of a leak and saw none. I started the car and let it run while looking to see if any fluid was leaking and could not see any after a minute or so.
After this I drove the car trying to maintain under 3000 rpm and made it to the mechanic's shop like that.
Overall, I would say this situation was mild in comparison to what could happen. I am definitely <em>not</em> recommending to drive a car with a major leak, or even with a medium one. My car exhibited no signs of leaking and based off of that I drove it.
|
First thing I'd try to figure out is how quickly it leaks - run it, stick your head underneath it and check if there is any visible leaking. If there is, chances are that it's not going to make it for 30 miles. I'd also check for any evidence of oil and water mixing. If there is, don't drive it.
If it's not leaking that badly I'd be tempted to top it up with plain water just to check if that reduces the temperature to something more normal. If it doesn't there's a chance that you either have a thermostat that's failed closed, a blockage in the cooling system or a really bad water pump. Again, check for leaks with the engine running and warm, any visible leaks this side of the odd drip and I'd be very careful about driving the car.
In general, water on its own tends to cool better than water + antifreeze, so you're not risking any damage from running water in the engine provided the outside temperature doesn't drop below freezing. Even then, you probably are going to be OK as hopefully there is a little antifreeze left in the existing coolant. Either way, advise the mechanic that he might have to drain the cooling system immediately depending on the outside temperatures.
If in doubt, I would get the car transported to the mechanic - even if you pay retail for the recovery it's likely to be cheaper than a blown engine, unless the engine is fairly terminal already.
|
I agree with Timo - if it is a big enough leak that you can see it clearly, then getting the car transported is much safer.
In general, using water as coolant is OK for a short time or as a "get you home" alternative, but it does not have the anti-freeze and corrosion inhibiting properties of a proper coolant mix, so should not be left in the engine for any length of time, especially if you live in a cold climate.
Additionally, you should avoid adding cold water to a hot engine unless you have no other choice - there is a risk that it can cause a thermal shock and risk cracking the block, which would end up very expensive!
|
https://mechanics.stackexchange.com
|
120,777
|
[
"https://mathoverflow.net/questions/120777",
"https://mathoverflow.net",
"https://mathoverflow.net/users/12858/"
] |
Suppose that $Q$ is a quaternion division algebra with center $k$, where $k$ is an arbitrary commutative field (let's say with $\operatorname{char}(k) \neq 2$ if necessary). Assume that $D$ is an arbitrary skew field (which a priori has nothing to do with $Q$ nor with the base field $k$), and assume that there is an injective ring homomorphism $\varphi \colon D \hookrightarrow Q$.
<blockquote>
<blockquote>
Is it true that $D$ is either a commutative field or a quaternion division algebra again?
</blockquote>
</blockquote>
My first guess was that this should be obviously true, but failing to see an obvious argument, I wonder whether it's true at all...
|
First note that if $A$ is the $2\times 2$ matrix algebra over a field, and $z\in A$ has trace zero, then by the Cayley-Hamilton theorem, $z^2=-\det(z)$ is a scalar. Hence, if $x,y \in A$ then $(xy-yx)^2$ is a scalar matrix. Secondly, if $A$ is the $n\times n$ matrix algebra over a field, with $n\geq 3$, then it is easy to get two matrices $x,y\in A$ such that $(xy-yx)^2$ is <em>not</em> a scalar.
Suppose $D$ is a skew field, and is finite dimensional (of degree $n$) over its
centre $K$. We may assume that $D$ is not commutative. Then the centre cannot be a finite field and hence $D$ is Zariski dense in $D\otimes _K {\overline K}\quad $ (${\overline K}$ is the algebraic closure of $K$). If $degree (D)\geq 3$, then by the preceding paragraph, $(xy-yx)^2$ is not in $K$ for some $x,y\in D$.
Your condition says that for all $x,y\in Q$ we must have $(xy-yx)^2$ lies in the centre of $Q$. Hence the same is true for $D$. Hence $D$ is indeed quaternionic.
[Edit] Tom is right. You do not need to assume that $D$ is finite dimensional over its centre. This can be proved as follows. Let $K$ and $L$ be the centres of $D$ and $Q$ respectively. Since $Q$ is quaternionic, the $L$ vector space spanned by $D$ in $Q$ is (an algebra) and is therefore all of $Q$. Hence $K\subset L$. I will now prove that the trace of $b\in D$ and the norm of $b$ (all viewed in $Q$) lie in $K$ itself. If $b\in K$ this is clear.
If $b\notin K$, then there exists $a\in D$ which does not commute with $b$. The equation
$$ab^2-b^2a= \operatorname{trace}(b)(ab-ba)$$ follows from Cayley-Hamilton in $Q$ and shows that $\operatorname{trace}(b)$ is an element of $D$. Hence the norm also: $\det(b)=\operatorname{trace}(b)b-b^2$.
If $a,b\in D$ and don't commute, then it follows that the $K$ vector space spanned by $1,a,b,ab,ba, aba, bab$ is a sub-algebra $R$ and is hence quaternionic. This can be extended to any three generic elements $a,b,c$ as well. Hence every generic element $c\in D$ already lies in the subalgebra $R$. That is: $D=R$ is finite dimensional over $K$.
|
Yes, it is. This follows from the theory of central simple algebras. Here's another proof, possibly similar to Aakumadula's: let $Q'=Q\otimes \bar{k}\cong M_2(\bar{k})$ and $D'=D\otimes \bar{k}\subseteq Q'$. If $D'$ has basis $1,a$ then it's commutative; otherwise, it has basis $1,a,b$. I claim that $a,b$ have a common eigenvector in $\bar{k}^2$. Indeed, suppose not. Then $a,b$ are diagonalizable. Pick a basis in which $a$ is diagonal. Then $b$ won't be either upper- or lower-triangular, and you can see combinatorially that $1,a,b,ab$ are linearly independent, so $D'=Q'$ and $D=Q$. Thus $D'$ is the space of upper-triangular matrices in some basis. But this is impossible, e.g. since then $D'$ has an ideal fixed by the Galois action.
|
https://mathoverflow.net
|
3,664,951
|
[
"https://math.stackexchange.com/questions/3664951",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/707142/"
] |
The statement <span class="math-container">$$\lim_{x\to -\infty}(2+3x^2)=\infty$$</span>
certainly means that the limit doesn't exist at <span class="math-container">$-\infty$</span>. To prove this by the definition of limit requires the use of its negation <span class="math-container">$$(\vert x-p\vert < \delta) \land\vert f(x)-L\vert\geq\epsilon$$</span>
This function is real valued, so we can't construct any argument by <span class="math-container">$p=\infty$</span> and so on. Rather, I think it should be sufficient to show that no matter what <span class="math-container">$\delta$</span> is chosen, when <span class="math-container">$p\to \infty$</span>, the distance <span class="math-container">$d(f(x),L)\to \infty$</span> and as such is greater than any <span class="math-container">$\epsilon>0$</span>. What steps should I take to make this proof work?
|
All the indices have to be different (because it's alternating), and if two multiindices are permutations of each other then the corresponding vectors are the same up to a sign.
Demanding that the indices are increasing is a way to force them all to be different while simultaneously picking a single element out of all the equivalent permutations.
|
There is, however, an instance worth keeping in mind where the order of the basic 2-forms breaks the ascending-index convention.
The example is Stokes' theorem from classical vector calculus.
It relates a 1-form <span class="math-container">$Pdx+Qdy+Rdz$</span> and a 2-form
<span class="math-container">$$Ady\wedge dz+Bdz\wedge dx+Cdx\wedge dy,$$</span>
where the coefficients
<span class="math-container">$P,Q,R,A,B,C$</span> are smooth functions on the cartesian coordinates <span class="math-container">$x,y,z$</span> of <span class="math-container">$\mathbb R^3$</span>.
It is well known that if
<span class="math-container">$$A=\frac{\partial R}{\partial y}-\frac{\partial Q}{\partial z}\ ,\
B=\frac{\partial P}{\partial z}-\frac{\partial R}{\partial x}\ ,\
C=\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}$$</span>
then a line integral is equal to a double one
<span class="math-container">$$\int_{\partial\Omega}Pdx+Qdy+Rdz=
\iint_{\Omega}Ady\wedge dz+Bdz\wedge dx+Cdx\wedge dy,$$</span>
where <span class="math-container">$\Omega$</span> is a bounded surface in space with boundary <span class="math-container">$\partial\Omega$</span>,
under some simple assumptions on the functions involved.
|
https://math.stackexchange.com
|
3,297,196
|
[
"https://math.stackexchange.com/questions/3297196",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/636281/"
] |
My professor often says that <strong>every metric space is a topological space</strong>.
But reading the definitions of both terms, it does not make sense to me to state it. That every metric space induces a topological space, I agree. But both are not the same thing. One space has a metric, and the other has a topology. Both things are completely different by definition.
What do you think about it?
This question became more pressing to me when I had to prove that "every metric space is normal". But normality is defined for topological spaces...
|
I agree that the terminology your professor uses might be technically imprecise. Part of the issue here is that in common mathematical parlance, we might use "topological space" to refer to the pair <span class="math-container">$(X,\tau)$</span> -- the set of objects and the collection of sets defining the topology -- or just to the set <span class="math-container">$X$</span>. Similarly, we might use "metric space" to refer to the pair <span class="math-container">$(X,d)$</span> -- the set of objects and the distance function on that set -- or just to the set <span class="math-container">$X$</span>. This is common usage even though in each case the definition is the pair, not just the set <span class="math-container">$X$</span>. With some experience you'll get used to determining what is meant based on context.
So, what is precisely meant by "every metric space is a topological space" is that the metric induces a topology, as you surmised. What is precisely meant by "every metric space is normal" is that for every metric space <span class="math-container">$(X,d)$</span>, the topological space <span class="math-container">$(X,\tau(d))$</span> -- where <span class="math-container">$\tau(d)$</span> is the topology induced by <span class="math-container">$d$</span> -- is normal.
|
Formally speaking you are quite correct. A metric space <span class="math-container">$(X,d)$</span> is a very different structure than a topological space <span class="math-container">$(X,\mathcal{T})$</span>. But the metric <em>induces</em> in a standard way a topology <span class="math-container">$\mathcal{T}_d$</span> by taking all open balls as a base.
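Concretely, the induced topology can be written out as
<span class="math-container">$$\mathcal{T}_d=\{U\subseteq X : \text{for every } x\in U \text{ there is some } \varepsilon>0 \text{ with } B_d(x,\varepsilon)\subseteq U\},$$</span>
where <span class="math-container">$B_d(x,\varepsilon)=\{y\in X: d(x,y)<\varepsilon\}$</span> is the open ball; a set is open exactly when it contains a ball around each of its points.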
Now we can also see <span class="math-container">$(X,d)$</span> as a topological space too, and we can talk about it being normal, Hausdorff, compact, connected and all other notions that are defined for all topological spaces.
It turns out that metric-induced topologies are special and have properties that
general spaces need not have, so it's useful to know when a topology actually comes from a metric. There are even theorems giving necessary and sufficient conditions on a topological space <span class="math-container">$X$</span> for its topology to be induced by some metric on <span class="math-container">$X$</span>, even if you didn't know this by construction.
Being metric also allows us to talk about non-topological notions such as (total) boundedness and completeness, and to do more "geometric" theory.
|
https://math.stackexchange.com
|
673,172
|
[
"https://physics.stackexchange.com/questions/673172",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/317102/"
] |
Two positions of two particles can be modelled by the vector equations; <span class="math-container">$$ r_1(t)=\begin{pmatrix} t-1\\t^2-1\end{pmatrix} $$</span> <span class="math-container">$$ r_2(t)=\begin{pmatrix} e^{-t}-1\\ e^{-2t}-1\end{pmatrix}$$</span>
Find the coordinates of the point where the particles collide.
Just reading this question I was confused, because previously in the question it was proven that both particles follow the curve <span class="math-container">$y=x^2+2x$</span>, so I thought there would be infinitely many points of collision. When solving the system of equations that comes from <span class="math-container">$r_1(t) = r_2(t)$</span> I got to <span class="math-container">$0=0$</span>, which means infinitely many solutions? I'm unsure, because the textbook I'm working from got a single point, which was <span class="math-container">$(-0.43,-0.68)$</span>.
|
You got <span class="math-container">$0=0$</span> when you wrote the 2 separate equations, and they appeared to be identical, right? But that only means that a collision <em>happens</em>, i.e. when <span class="math-container">$x_1 = x_2$</span> we also have <span class="math-container">$y_1 = y_2$</span>. You still need to find these values. Solve either of those equations (as I said, they are identical) and you get the answer. By the way, there is no elementary closed-form solution; the answer is given in terms of special functions, specifically the Lambert W function.
|
The first thing I think when I see this problem is "coordinate transformation". In particular, the translation:
<span class="math-container">$$\begin{pmatrix} x'\\y'\end{pmatrix} =\begin{pmatrix} x\\y\end{pmatrix}+\begin{pmatrix} 1\\1\end{pmatrix}$$</span>
Then:
<span class="math-container">$$ r'_1(t)=\begin{pmatrix} t\\t^2\end{pmatrix} $$</span> <span class="math-container">$$ r'_2(t)=\begin{pmatrix} e^{-t}\\ [e^{-t}]^2\end{pmatrix}$$</span>
so the fact that:
<span class="math-container">$$y'=x'^2$$</span>
for both trajectories is trivial, and the solution reduces to one dimension, requiring:
<span class="math-container">$$t= e^{-t}$$</span>
That is a famous equation, solved with the Lambert <span class="math-container">$W$</span>-function.
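If you want to check the numbers, here is a small sketch (scipy is just one convenient way to evaluate the Lambert <span class="math-container">$W$</span>-function; a plain root-finder on <span class="math-container">$t-e^{-t}$</span> would do equally well):
<pre><code># The collision time solves t = exp(-t), i.e. t = W(1) (the omega constant);
# the collision point is then r1(t) = (t - 1, t^2 - 1).
from scipy.special import lambertw

t = lambertw(1).real        # ~0.5671
x, y = t - 1, t**2 - 1
print(round(t, 4), round(x, 2), round(y, 2))   # 0.5671 -0.43 -0.68
</code></pre>
which matches the textbook's point <span class="math-container">$(-0.43,-0.68)$</span>.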
|
https://physics.stackexchange.com
|
6,789
|
[
"https://scicomp.stackexchange.com/questions/6789",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/4052/"
] |
I apologize if this is a naive question. I'm trying to create some bootstrap data for a system of linear, ordinary differential equations at steady state.
Since the equations represent the concentration of chemical species, X has to be positive. Further, I'm trying to find the sparsest possible matrix A, for the sake of keeping the dependency graph of the equations as small as possible.
Using the cvx toolbox in MATLAB, I put the problem like this:
<pre><code>clear
% the threshold value below which we consider an element to be zero
delta = 1e-8;
% problem dimensions (m inequalities in n-dimensional space)
n = 25;
X = rand(n, 1)
b = zeros(n, 1);
c = zeros(n, n) + delta^2
alpha = 0.1
% l1-norm heuristic for finding a sparse solution
fprintf(1, 'Finding a sparse feasible point using l1-norm heuristic ...')
cvx_begin
variable A(n,n)
%minimize(nnz(A))
minimize(alpha * nnz( A ) + (1-alpha) * norm(A*X - b, 2) )
cvx_end
% number of nonzero elements in the solution (its cardinality or diversity)
nonzero = length(find( abs(A) > delta ));
fprintf(1,['\nFound a feasible A in R^%d that has %d nonzeros ' ...
'using the l1-norm heuristic.\n'],n,nonzero);
</code></pre>
This tends to find matrices A which are filled with incredibly small, near-zero values. Does anyone have suggestions for heuristic approaches to find a combination of X and A which satisfy the constraints I described above? They don't need to be particularly fast or memory efficient; I'm working with relatively small matrices.
To be clear - I'm not trying to solve for a particular A or X. I'm trying to find a combination of A and X that satisfy the constraints I described.
|
Without loss of generality we may treat the case where $X$ is the all-ones vector $u$, since forming $AD^{-1}X$ is equivalent to $Au$, where $D = \text{diag}(X)$ is the diagonal matrix with entries from $X$, and $A$ has the same sparsity as $AD^{-1}$.
Mention is made of "keeping the dependency graph of the equations as small as possible". Aside from avoiding the triviality of a matrix of all zeros, one might want to impose a condition that this dependency graph be connected, or more weakly, that no row of $A$ is allowed to be all zeros.
If the goal is to minimize the number of nonzero entries in $A$, subject to $Au=0$, then we must have at least two nonzero entries in each nonzero row of $A$. Moreover this minimum can be attained by placing both a $+1$ and a $-1$ entry in each such row.
If connectedness of the dependency graph is sought, it can be achieved without further loss of sparsity. For example, define $A = I - P$ where $P$ is the permutation matrix s.t. $P_{1,n} = 1$, $P_{i+1,i} = 1$ for $i=1,\ldots,n-1$, and otherwise $P_{i,j} = 0$.
|
You talk about l1-heuristics, but in the code it looks more like you are interested in solving the underlying combinatorial problem involving the nnz of A. In an l1-heuristic, you would typically use norm(A(:),1) or something like that. The fact that you only get zeros is natural, as the problem is homogeneous in A when b is 0. You can introduce an arbitrary scale in the problem though, such as sum(sum(A))=1, to ensure the solution is non-zero.
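To illustrate that suggestion, here is a rough sketch in Python/cvxpy rather than MATLAB/cvx (the sum(A) == 1 normalization is just one arbitrary way to fix the scale, and the names are made up for the example):
<pre><code># l1 heuristic for a sparse A satisfying A @ x = 0, with a scale constraint
# so that the trivial all-zero solution is excluded.
import numpy as np
import cvxpy as cp

n = 25
x = np.random.rand(n)                       # positive "steady-state" vector

A = cp.Variable((n, n))
objective = cp.Minimize(cp.sum(cp.abs(A)))  # l1 surrogate for nnz(A)
constraints = [A @ x == 0,                  # steady-state condition
               cp.sum(A) == 1]              # fixes the scale; rules out A = 0
cp.Problem(objective, constraints).solve()
print(np.sum(np.abs(A.value) > 1e-8), "entries above threshold")
</code></pre>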
|
https://scicomp.stackexchange.com
|
588,299
|
[
"https://physics.stackexchange.com/questions/588299",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/234261/"
] |
As far as I have understood, there is nothing that says that time or space is continuous, but at the same time we must assume this in order to be able to calculate derivatives or integrals with respect to them. How can we justify this?
|
Let's say space is really a lattice with spacing <span class="math-container">$\Delta x$</span>. It turns out that this idea has more trouble with experiment than you might think, but we can plow ahead for the purposes of this question.
You might propose replacing integrals in physics with discrete sums over individual lattice points, to take a concrete example let's think about the work needed to move a particle from point <span class="math-container">$A$</span> to point <span class="math-container">$B$</span>
<span class="math-container">\begin{equation}
W = \int_A^B \vec{F} \cdot {\rm d} \vec{x} \rightarrow \sum_{i=1}^N \vec{F}(\vec{x}_i) \cdot \hat{e}_{i,i+1} \Delta x
\end{equation}</span>
where <span class="math-container">$i=1,2,...,N$</span> labels the lattice points that the particle follows in going from <span class="math-container">$A$</span> to <span class="math-container">$B$</span> and <span class="math-container">$\hat{e}_{i,i+1}$</span> is a vector pointing from the lattice space at <span class="math-container">$i$</span> to the lattice point at <span class="math-container">$i+1$</span>.
If <span class="math-container">$\Delta x$</span> is small enough so <span class="math-container">$N$</span> is large enough, these two quantities will be quite close (since in the limit of infinite <span class="math-container">$N$</span> the two quantities are actually exactly the same). To see a difference (if there is one) we need to probe distances of the same order or smaller than <span class="math-container">$\Delta x$</span>, or else have a large precision to tell the difference between these two expressions.
Here's the point. No one has <em>ever</em> found any disagreement between experiment and theory that can be attributable to the failure of the continuum limit. If there is such a <span class="math-container">$\Delta x$</span>, it must be so small that it is a very good approximation to use integrals instead of sums over lattice in all experiments done to date. You can think of the LHC as probing energy scales of order 1-10 TeV, which amounts to <span class="math-container">$10^{-18}-10^{-19}$</span> meters -- so <span class="math-container">$\Delta x$</span>, if it is nonzero, must be smaller than this.
There are other problems with having a lattice, but this is already a powerful argument that the world is at least effectively continuous at the scales we can probe.
|
This is a comment, as Andrew's answer is adequate for the problem.
I want to point out , which is not clear in your question, the difference between mathematical modeling and the object modeled.
When modeling an object mathematically one can use continuous variables by the function of mathematics. If the object modeled has discontinuities, the mathematics will model it with continuous variables. A crystal lattice for example can be modeled in continuous (x,y,z) variables with discontinuities in the mathematical model where the atoms are.
In the case of space-time there are really two concepts overlapping: the mathematical variables of space and time to be used in modeling it, and the object modeled, i.e. space and time. The mathematical variables to model a latticed space-time can be continuous, and give the functional form of the latticed physical space-time. The <span class="math-container">$x$</span> in Andrews answer is still a continuous variable used to model a latticed space.
|
https://physics.stackexchange.com
|
520,293
|
[
"https://physics.stackexchange.com/questions/520293",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/235379/"
] |
Suppose I have a wave function <span class="math-container">$\psi $</span>; we express it in a continuous basis of states as
<span class="math-container">$$\psi= \int_{-\infty}^{\infty} dxC (x)\rvert x\rangle = \int_{-\infty}^{\infty} dx\rvert x\rangle \langle x \rvert \psi (x)$$</span>
This can be expanded in Riemann sum as
<span class="math-container">$$ \psi= \lim_{\Delta x\to 0} \sum_{i=-\infty}^{\infty} \Delta xC (x_i)\rvert x_i\rangle.$$</span>
This is not symmetric with the expression for <span class="math-container">$\psi $</span> in other discrete Hilbert spaces.
Which is
<span class="math-container">$$\psi= \sum_{i=-\infty}^{\infty} C (\phi_i)\rvert \phi_i\rangle,$$</span> where this <span class="math-container">$i$</span> takes discrete values.
Symmetry is lost due to the appearance of the factor <span class="math-container">$\Delta x$</span>.
Can anyone please help me resolve this paradox?
|
There is no paradox. If you have the continuous relation as
<span class="math-container">$$| \psi \rangle = \int_{-\infty}^\infty dx \ \psi(x) |x \rangle = \lim_{\Delta x \rightarrow 0} \sum_i \Delta x \psi(x_i) |x_i \rangle,$$</span>
to draw an analogy with the discrete basis expansion <span class="math-container">$\sum_i C_i |e_i \rangle$</span>, you need to realize that the analogy goes as <span class="math-container">$C_i \leftrightarrow \Delta x \psi(x_i)$</span>, not <span class="math-container">$C_i \leftrightarrow \psi(x_i)$</span>.
For a better intuition, think of it this way, the continuous basis has infinitely many more basis vectors than a discrete basis, meaning that the component of any state <span class="math-container">$|\psi \rangle$</span> along any of these basis vectors <span class="math-container">$|x\rangle$</span> <em>has to be infinitesimal</em> for the expansion to make sense. So <span class="math-container">$\psi(x) =\langle x | \psi \rangle$</span> is more of a <em>component density</em> (for lack of a better term).
This is similar to any other part of physics where densities are involved. For example, the total mass of a system of discrete point masses is:
<span class="math-container">$$M = \sum_i m_i,$$</span>
whereas for a continuous mass distribution it's
<span class="math-container">$$M = \int_\mathcal D d^3\mathbf x \rho(\mathbf x) = \lim_{\Delta V \rightarrow 0} \sum _{i} \Delta V \rho(\mathbf{x}_i).$$</span>
Here you can make the analogy <span class="math-container">$m_i \leftrightarrow \Delta V \rho(\mathbf x_i)$</span>, i.e. the mass of an infinitesimal volume element in the continuous distribution is <span class="math-container">$\Delta V \rho(\mathbf x_i)$</span> (and not <span class="math-container">$\rho$</span> itself).
Similarly, the component of <span class="math-container">$|\psi \rangle$</span> along the basis vector <span class="math-container">$|x \rangle$</span> is <span class="math-container">$\Delta x \psi(x)$</span>, with <span class="math-container">$\psi(x)$</span> playing the role of a "density".
|
There is no paradox. You're comparing two different formulas: one is an approximation that applies to a continuous basis of states <span class="math-container">$|x\rangle$</span>, the other is an exact formula that applies to a discrete Hilbert space. Just because two formulas look different doesn't mean one of them is wrong.
|
https://physics.stackexchange.com
|
1,728,175
|
[
"https://math.stackexchange.com/questions/1728175",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/-1/"
] |
<strong>Question:</strong> Show that the problems $ax'' + bx' + cx = f(t); x(0) = 0, x'(0) = v_0$ and $ax'' + bx' + cx = f(t) + av_0 \delta(t); x(0) = x'(0) = 0$ have the same solution for $t \gt 0$. Thus the effect of the term $av_0 \delta(t)$ is to supply the initial condition $x'(0) = v_0$.
The first thing I notice is that these two differential equations involve a few initial values, so naturally I approach the problem using the Laplace transform. I begin with the first equation: $$ax'' + bx' + cx = f(t)$$ becomes $$a(s^2 X(s) - v_0) + bsX(s) + cX(s) = F(s) \rightarrow as^2 X(s) - av_0 + bsX(s) + cX(s) = F(s)$$
Solving in terms of $X(s)$ yields $$X(s) = F(s)G(s) + av_0 G(s)$$ where $G(s) = \frac{1}{as^2 + bs +c}$. I then take the inverse Laplace of this equation to get $$x(t) = \int_{0}^{t} f(\tau)g(t - \tau) d\tau + av_0 g(t)$$
I do the same for the second equation, $ax'' + bx' + cx = f(t) + av_0 \delta(t)$, but end up with $$x(t) = \int_{0}^{t} f(\tau)g(t - \tau) d\tau + av_0 u(t - a)g(t - a)$$
From here I am unsure about what to do. I have two <em>distinct</em> values for $x(t)$, yet I am supposed to show that the two have the same solution? Have I made any errors? What do I do now? Also, what is the intuition behind the very last sentence, specifically that "the effect of the term $av_0 \delta(t)$ is to supply the initial condition $x'(0) = v_0$"?
|
To say that a number $u$ is "within $x \%$ of $y$" means that
$|u - y|/|y| \leq x/100$ (assuming $y \neq 0$).
Equivalently, $u \in [ y - \frac{x}{100}|y|, y + \frac{x}{100}|y|]$.
|
Percentage difference and percentage change sound the same to me - maybe there's something I'm just missing.
To compute the upper and lower bounds, you figure out what $x\%$ of $y$ is using a proportion, and then add and subtract that number from $y$. In your example, $10\%$ of $100$ is $10$, so as observed in the comments, we add and subtract $10$ from $100$.
Subtracting $10$ gives $90$, and adding it gives $110$, so you are within $10\%$ of $100$ if you are at least $90$ and at most $110$. In interval notation, you are in the interval $[90, 110]$.
|
https://math.stackexchange.com
|
1,645,661
|
[
"https://math.stackexchange.com/questions/1645661",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/276025/"
] |
I am looking for a combinatorial proof for it. I know how to prove it mathematically. Expanding $(1+x)^n$ and replacing $x$ with $1$ will give me the result but I am not able to explain it combinatorially.
Note: This is not a homework question. I am just curious.
|
As usual $[n]=\{1,\ldots,n\}$. $\sum_{k=1}^{n-1}\binom{n}k$ is clearly the number of non-empty, proper subsets of $[n]$, since $\binom{n}k$ is the number of subsets of size $k$. Now let $A_k$ be the number of subsets of $[n]$ with maximum element $k$; clearly $|A_k|=2^{k-1}$, since the rest of $A_k$ can be any subset of $[k-1]$. Thus,
$$\left|\bigcup_{k=1}^nA_k\right|=\sum_{k=1}^n2^{k-1}=1+\sum_{k=1}^{n-1}2^k\;.\tag{1}$$
On the other hand, $\bigcup_{k=1}^nA_k$ is clearly the set of non-empty subsets of $[n]$, so $(1)$ counts all of the non-empty, proper subsets of $[n]$ <strong>plus</strong> the set $[n]$ itself. Subtracting $1$ for the set $[n]$ leaves the desired result: both $\sum_{k=1}^{n-1}\binom{n}k$ and $\sum_{k=1}^{n-1}2^k$ count the non-empty, proper subsets of $[n]$, and they must therefore be equal.
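For anyone who wants a quick numerical sanity check of the identity proved above (not a substitute for the combinatorial argument):
<pre><code>from math import comb

for n in range(2, 12):
    lhs = sum(comb(n, k) for k in range(1, n))   # non-empty, proper subsets
    rhs = sum(2**k for k in range(1, n))
    assert lhs == rhs == 2**n - 2
</code></pre>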
|
Consider the set of $n$-bit integers.
On one side, group them by the number of ones. The sizes of the groups are $\dbinom nm$.
$$0000\|0001\ 0010\ 0100\ 1000\|0011\ 0101\ 1001\ 0110\ 1010\ 1100\|1110\ 1101\ 1011\ 0111\|1111$$
On the other side, group them by prefix made of $n-m-1$ zeroes followed by a single one (except for the first group). The sizes of the groups are $1$ and $2^m$.
$$\color{blue}{0000}\|\color{green}{0001}\|
\color{green}{001}0\ \color{green}{001}1\|
\color{green}{01}00\ \color{green}{01}01\ \color{green}{01}10\ \color{green}{01}11\|
\color{green}{1}000\ \color{green}{1}001\ \color{green}{1}010\ \color{green}{1}011\
\color{green}{1}100\ \color{green}{1}101\ \color{green}{1}110\ \color{green}{1}111$$
|
https://math.stackexchange.com
|
2,174
|
[
"https://bioinformatics.stackexchange.com/questions/2174",
"https://bioinformatics.stackexchange.com",
"https://bioinformatics.stackexchange.com/users/156/"
] |
I have a fastq file from minION (albacore) that contains information on the read ID and the start time of the read. I want to extract these two bits of information into a single csv file.
I've been trying to figure out a grep/awk/sed solution, but without success.
E.g.
<pre><code>@93a12f52-95e5-40c7-8c3e-70bf94ed0720 runid=17838b1d08f30a031bf60afabb146a8b0fba7486 read=12217 ch=492 start_time=2017-07-04T06:42:43Z
CTATTGTCCCCTGCCGCTGCCCCTCCTGCTACGCCCCACTGCCTCACCAGCCGTTACGGTCGCCCCCCATCGCATGCCTTTACACACACACTTCTTTACACATGCTATCTTCCC
+
""*)&$-.,-(#"&'$%''+16#"$##&)%%/"+*(*(&#"&'%1"+)#)""%$$#&&'1%"'8>MJ<#'&%'.2'.$(&#'()'&&%'$('%"%%%..$#"&"#+&,*$%"#"
@ff37e422-a25f-404c-8314-ef1733f9c30c runid=17838b1d08f30a031bf60afabb146a8b0fba7486 read=8432 ch=200 start_time=2017-07-04T06:56:41Z
CGATGGCCGTATGCTTTTGTTATGAAGCGAAAAGCTGCTCGCTTCTCTAGATAATAATGATGTGGCGAAAACGCTATGCGATTCGTTGACATACATGATGGCGGATTTATCTACCACTTTGTGGCATGCTTTTCTCGCCAGATAATGGAATGTTTCTCTGGCGGTAATGGATAGTATCAAATCTCACTAGCCCATTCTATAAGCGCATCCGCATGCACTAGTTCTTGATTCGATCGTCCTCTAGCATGTTCGAAGAAGATAGCATTCACTATCATCATCGCCTCAGGTAAGTTTATTCGGTTGGGGCGTGTGAAGGCAAACACCTTGTTGTCCAGTAAGTTTTCAGTTACTATAACTTAAAGTCGCACATGAATCTAGTCTCCTATTCCCCACCCATGATCCCACTCACACATTTCTACAGAGATGTGGTTAGAAATTTTCATATTAGGTCAGCTTTGACTCAATAAGACATAATTCTTCACTGAATGACTTTTTAAGAACCACCAGGACCAGAGAGAACCAAGAGAGTGGTACCTCTTAAAACACAATAAAGTGATTCAGCCTTAGCCATTGGATTCTGGAGGACCTTGAACCATGTGGGAAGCAGCTCAGGGTGGCCATGTACTATACTGGCGGGTAAGCTTCTGGAGTGCTAGGTTCTTTTTGTCTTTTCTTAAGCATTGCCGCCAGTTGATTGGGTTTTGAACATAAAATAATGCGCCACCAGCAATTCCAGATTTGTTCCTACGGGATAGATTTGTTCAGTTCTAGCATTATGCTTCACTAACCAGATGCGGGCCCTAAGTCCTTCACTTGGAATATTGGATTGGATCATGAGAATATTCTGTCTGAAGCTCGTCATTAATTTTGTTACAAAATAGAGCTTTTTGACTGGAAGTACCACCATACGTGTTCTCAAACTTCAGCATTTTTAGAACTTCCCACGGCATCTTGACCCTTTTCACAGCATGGATAGTCAGGCAGCAGTGAACTTTGTGACTCTTTAATGCCTTCACTTTTCTCTCAGTTTCCCCGCCTTGCGTTATCTTTACTCGTCTTGGGACTTTTATCCCAATGCCAGCCTTCTACCCTGAGACCTCAGTGGGTCATCATCCCAGCCCGGGACATCTCATCCCATCATTTATGGGCTGTTGTGTTTTTTTCAAAACCTAGCCCTCTCAGGAGGAGGAGGAGTGGGAGTCAGTTCAGTGAGGAGGATTAGGATGATCTGAAATGTAAGCACATATAAGCGAAGCACTTATTTTGGGTTGGGTCCTCACGGTGGACATAAGATCGCCTTATGTGTTTAGTAAGCCATTCCTAGCTCTCAATGGCGTGATTACATAGAAGCGTGAGGGATCAGTCCTATGGAAGACTAGGAAGTAAATGAACAAAATATATTAACCATAGAAGTCTCATGGGTCGCTGTAGCCAAAAGATTAACACTTTTGACTACATTGTGGTTTTAGGCATTGAAACAAAAACTTTGAGTCTCCTAAACAAATGAATGGAAAATAGTAGCGAACTTCGATTCCTAACATTAAATCTAGAAATAGCAAGTTAGTTTAAAGACTTTATTTAGCTTTGCTTGCTATAATGAAAACCTTGCCTCCCGGTCGGGGCCATTGTGCCTGAAGCTAGCTTATTGTCTCCTCGAGCTCCCAGCTTCAGCAACTCCTTTTGAAGCGTTTGTCTCAGCTTGGATCTTCAGCAGCTCTTGGTGGCTCTTTTGAGCTAGCTCCTCTGAGATCTTGTATTTGGTAGGTCGCTTAGTCATAGTACTTTTCTTTTAACACCCTTCAGCTCTACGATTACATTTGGTTTTGTGGATATCATAATGATTGATGTGAAGATACATTGTACATGTG
+
"#""#"##"'"&%&?JP7+80)'&*+&&(,.>3&*(#%(')*&'*#(-$)"&,63;?844&&#$'3+.>@9;-/259...&:"$"/*"#*#%(.&%)+,/76/+'4B93:;70.1)'1:)*#&()(+'03589/++&)'$-:;@,B5(9+.JAU;7-+&,1212.+329NPSDHRHH6),&)%)+).-*$,<1(.+&$$2&,-0+.&'//"4(%"+#,/)<56%*%*$<4*/;DI+6>((-%*).+--;2-'04-062<@-<>6CP)#.0/+,#(*2+#"$$-2'5(%%&'5+)$$%8+0;>.@9*(,((,*-2/49G9/1/3./.0CQ93E+/-D-'67J*)($-2+'*/&*)+%&'092<C9/5()$&%(.,+-'**)'2/.(//6<59-5-)"**40:A4+834)-(*+;K0-,&)+%)(#-&+&&))+$>9.6<?A;3'/85$/--..)/3DP99HJ9."2;I?@C/5%.4<0+0+&$-05C46208AFQE80"%356+02*9I73//(-*/9706192*/)%)(&'($1749C*KKA*/#"/*)5($'#+7@=$%)/&+%##*&&;.),#(4.,2/'-G03%""#,(2)./,%%1;7#+%$'(03'5)+&.%1)&(%'#*,$3#''-3,),$"),'+,,*$%'%$33ILADPPY@JW\O2-8'9[Y@--*.,""$%$"%#"65',0/4(6.*43B1.#&)+),(6BI729=9&#$.)#,(,(/-286D&),'C;P=)'-'#$)*91*$%(.3LH31,($/(+)'$$*#+'((('*)00:7C1E24(%+$&-,$'++)<)'**"&''"(*(.14++$5&*,%?.F1<10"&%$,)&&*(&$%$/.""+3"#"#"*%5*->87FQ>AKY<9855,&(*@>HAZRQ)'%$"--&/#%.+,/A9F8-1'+"6--*34P-9;=997;7<SUU8((-+.:784/%).&)#*'(*/%7H(1JI-+&+)&9.,/N)"%"&))22/&,)%)'+*/17:8$''(*.*..(/.(%+*&22(+)7GM99A@9)0*1%/470((&((%(937'15084#,($"&'%"'4.,+5<7($*0+*291$*&$%%('+/)'.1-(-.7=K/'%"'*)03<:/<=1(>.((-9-'+>&&&%';,)69V<'1:>FNXA+01:7*#"$'$.%&4BQT7/&03++)'&'*3E.+0(*6..*<.)9*)))'%#(*.&&'"*-IU=&,-))-.,->'G--6..90067%(,0/*%9)&#'""#$,5H/$/#-/&2(.0.+;>9--7.3@+2(/4.*0&,-&'/*%0-**-#,/2&-$*)'+#"*)-*##&$3):C:D6H+',./2?<0>)%#&&3::9;8$#%#('$#$',,,*-346@=.%&&(&%('*+&*&1)'&-..#'+2*)/6499=-2*,0,82,$)($:2%*##%$,.5=-5/((#)2./('($@=4,)%1+176:AHQT,(,)0(()@2'+(18G?($)(*'.)&&+*12SD7,18I2)()*+C:01ETZIPDUOB+.55K7B:A5)*/('')/-1=7E<IGR=43@A68;K/:8.'--)BGEE5-0/5+,2''DXBLW7)#DHTC5AL82+--,&#);+).,'3)/.336/&'&(,+8E346-/%1-)1*$+),@-&-%'$+)5.&&%&#(,2&03&/((,(-5%%)?7I4*').+/7+&***845(*/.)(.0))<?YK3=GHH2&*)%$%#%KLY<9&&+,8;/2*./;+))-)/03$&*$+0'36<J+5/+&+(HGFFM57,59)-,0)/:**"%$&#$'&$%#)*%33VX=@FOL00(''4IGIYOG0=1,,#(*$)-0CFM?8+31E8)(+)(%'"'/""%+3/,1;+7-*''&-%98.$$/'&)&+&(&&"
</code></pre>
Should produce
<code>
93a12f52-95e5-40c7-8c3e-70bf94ed0720, 2017-07-04T06:42:43Z
ff37e422-a25f-404c-8314-ef1733f9c30c, 2017-07-04T06:56:41Z
</code>
|
<pre><code>awk '{if(NR%4==1) print $1, $5}' file.fastq | sed -e "s/ start_time=/, /" -e "s/^@//"
</code></pre>
The <code>awk</code> command gets the first of every 4 lines, printing the first and fifth "word". <code>sed</code> is then used to strip the initial <code>@</code> and replace <code>start_time=</code> with <code>,</code>. The output on your example file is:
<pre><code>93a12f52-95e5-40c7-8c3e-70bf94ed0720, 2017-07-04T06:42:43Z
ff37e422-a25f-404c-8314-ef1733f9c30c, 2017-07-04T06:56:41Z
</code></pre>
|
Since the string <code>start_time</code> will only appear on the header line, or else you don't have a valid fastq file, you can simply do:
<pre><code>$ perl -ne '/^@(\S+).*start_time=(.*)/ && print "$1, $2\n"' file.fastq
93a12f52-95e5-40c7-8c3e-70bf94ed0720, 2017-07-04T06:42:43Z
ff37e422-a25f-404c-8314-ef1733f9c30c, 2017-07-04T06:56:41Z
</code></pre>
Alternatively, since you mentioned <code>awk</code> and <code>sed</code>:
<pre><code>$ awk -v OFS=", " '/start_time/{print $1,$NF}' file.fastq | sed 's/start_time=//'
@93a12f52-95e5-40c7-8c3e-70bf94ed0720, 2017-07-04T06:42:43Z
@ff37e422-a25f-404c-8314-ef1733f9c30c, 2017-07-04T06:56:41Z
</code></pre>
Or, doing the whole thing in <code>awk</code>:
<pre><code>$ awk 'sub(/start_time=/,""){print $1", "$NF}' file.fastq
@93a12f52-95e5-40c7-8c3e-70bf94ed0720, 2017-07-04T06:42:43Z
@ff37e422-a25f-404c-8314-ef1733f9c30c, 2017-07-04T06:56:41Z
</code></pre>
And if the <code>@</code> annoys you:
<pre><code>$ awk 'sub(/^@/,"") && sub(/ .*start_time=/,", ")' file.fastq
93a12f52-95e5-40c7-8c3e-70bf94ed0720, 2017-07-04T06:42:43Z
ff37e422-a25f-404c-8314-ef1733f9c30c, 2017-07-04T06:56:41Z
</code></pre>
And in <code>sed</code>:
<pre><code>$ sed -n 's/^@\([^ ]*\).*start_time=\(.*\)/\1, \2/p' file.fastq
93a12f52-95e5-40c7-8c3e-70bf94ed0720, 2017-07-04T06:42:43Z
ff37e422-a25f-404c-8314-ef1733f9c30c, 2017-07-04T06:56:41Z
</code></pre>
Or, if your sed supports it:
<pre><code>$ sed -En 's/^@(\S+).*start_time=(.*)/\1, \2/p' file.fastq
93a12f52-95e5-40c7-8c3e-70bf94ed0720, 2017-07-04T06:42:43Z
ff37e422-a25f-404c-8314-ef1733f9c30c, 2017-07-04T06:56:41Z
</code></pre>
Finally, since <code>grep</code> can't do replacements, to use it you would have to do something like:
<pre><code>$ grep -oP '^@\K\S+|start_time=\K.*' file.fastq | paste - -
93a12f52-95e5-40c7-8c3e-70bf94ed0720 2017-07-04T06:42:43Z
ff37e422-a25f-404c-8314-ef1733f9c30c 2017-07-04T06:56:41Z
</code></pre>
And to get the commas:
<pre><code>$ grep -oP '^@\K\S+|start_time=\K.*' file.fastq | paste - - | sed 's/\t/, /'
93a12f52-95e5-40c7-8c3e-70bf94ed0720, 2017-07-04T06:42:43Z
ff37e422-a25f-404c-8314-ef1733f9c30c, 2017-07-04T06:56:41Z
</code></pre>
|
https://bioinformatics.stackexchange.com
|
43,770
|
[
"https://mechanics.stackexchange.com/questions/43770",
"https://mechanics.stackexchange.com",
"https://mechanics.stackexchange.com/users/24111/"
] |
My S60 has a developing problem where considerable quantities of soot are emitted from the exhaust. Upon first startup, cold engine, the smoke is nearly entirely absent. When the engine reaches some particular temperature (normally at consistently the same point on my journey home/into work) the message area shows "Engine Service Required". Shortly after this point in my commute there's a long uphill, and if I take it in 3rd gear at around 2000 rpm there's so much soot flying out of the exhaust it causes the trailing car's automatic headlights to turn on (if it has the facility, of course).
I've also noticed that there's a periodic hesitation in the power delivery -
again going up that hill in 3rd at about 2000 rpm, the power surges on and off slightly, every second or so. Generally the power delivery is poorer than it was, with the car sounding a little more "asthmatic/wheezy" than it used to
Fault codes read recently indicated fuel pressure issues, sometimes too low or too high, and a code from the turbo control system (I think it was "6805 boost pressure control fault"), but it doesn't cause limp mode (in its current manifestation).
A local garage said they didn't find any obvious intercooler or hose leaks but turbos aren't their speciality, and recommended another garage further away with more experience in forced induction
From the symptoms described, are there any causes more likely than others for the soot?
|
This one turned out to be a relatively common problem on D5s; the intercooler had split and boost pressure was being lost. Quite why the first shop didn't pick this up I've no idea, but having replaced the intercooler there is now a pronounced improvement in power delivery and much less soot from the exhaust.
|
Does it have EGR - exhaust gas recirculation? If so, this could be part or all of the problem. Sounds similar to other cars with EGR faults.
|
https://mechanics.stackexchange.com
|
196,048
|
[
"https://softwareengineering.stackexchange.com/questions/196048",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/89141/"
] |
<strong>Relevant Background Details</strong>
<ol>
<li>We've got two types of VMs (Utility Boxes & Web Servers) that developers need. </li>
<li>We are going to be using git for version control.</li>
<li>We have developers who have different preferences for their working environment (e.g. Some Linux, Some Windows).</li>
<li>I'm in the Linux camp and I believe Git and Windows don't mix as well as Git and Linux. However, this could be personal bias.</li>
<li>We are using Linux in production.</li>
</ol>
<strong>Building the Vagrant VMs for distribution</strong>
<ol>
<li>Build a base box with the relevant OS for vagrant. </li>
<li>Use a configuration manager (e.g. Chef) to build out the Utility & Web images, then convert them to new base boxes. It will clone service configurations from a centralized git repository.</li>
<li>Distribute the base boxes (really just virtual machine images) for the users to develop locally with vagrant. The distributed box will automatically pull in source code from certain git repos (e.g. libraries).</li>
<li>If changes are planned for the production environment, all developers will need to pull down new base boxes for vagrant as they come prepackaged. I think this is the simplest way for a new developer to deal with it. Staging is updated to match the new development VMs in preparation.</li>
</ol>
<strong>Developer Workflow</strong>
<ol>
<li>Get assigned an issue from the issue tracker.</li>
<li>Use the vagrant VM to clone the current dev repository into the folder it shares with the host OS (so the Developer can use their favorite IDE).</li>
<li>Developer commits changes and tests locally.</li>
<li>When satisfied, Developer merges his changes to the dev repository. If there are conflicts, work with the Developer who committed the conflicting code to resolve the issue.</li>
<li>When Dev is in a stable state, Dev is merged with the current Staging repository for QA of the new features. Nothing is pushed from Dev to Staging until Step #6 is completed. Hooks generate new copy of documentation for Staging.</li>
<li>Staging is cloned into Production once the QA is completed. Hooks generate new copy of documentation for Production.</li>
</ol>
Are there any obvious flaws/pitfalls in the above, or steps that are generally considered 'best practices' that should be added?
|
Your plan sounds great! I think you are off to a really good start.
My only advice is in regard to your Developer Workflow. I think your <code>dev</code> branch may become problematic, because developers will be merging code willy-nilly when <em>they</em> think it's ready.
If branchA, branchB, C, and D are all merged into <code>dev</code>, and it fails, you can't be certain which parts are failing. Also, if someone pushes up something with a conflict, you don't know who it was. Madness and insanity are bound to ensue.
If a feature turns out not to be ready, and code needs to be backed out of <code>dev</code>, other developers will have already pulled down their additions. Those developers then will merge back into <code>dev</code>, unwittingly re-introducing the broken code. Madness and insanity are bound to ensue.
You're going to need a couple steps of separation to keep untested code away from tested code.
This all depends on the skill sets of your team, and how you actually work together. What follows is my solution to problems of a quickly expanding team with differing levels of git knowledge and differing levels of developer code dependability.
I always try to tell people to not simply think about developer workflow, but testing procedure, and release process. They all need to be planned as part of a singular process.
<ol>
<li>Lead Dev or Release Mngr (whoever) creates a new <code>release</code> branch based off <code>master</code>. This branch will be a container for anything going into the next release.</li>
<li>Lead/ReleaseMngr creates <code>integration</code> branch based off release.</li>
<li>Developer creates new feature branch (or topic branch, whatever you want to call it), based off the current release branch.</li>
<li>Developer tests locally, is happy.</li>
<li>The developer's feature branch is deployed somewhere and tested by QA <strong>independently</strong> of any untested code.</li>
<li>QA signs off on feature - it gets merged into an <code>integration</code> branch. (ideally, IMHO, only after the feature branch has been rebased off <code>release</code>, to force the conflict resolutions )</li>
<li>QA tests the <code>integration</code> branch which is just the <code>release</code> branch + this one feature.</li>
<li>QA signs off - integration is merged into release (if not signed off, integration is blown away and recreated based on <code>release</code>). This is the reason for <code>integration</code>. No one pulls from this branch, and it can be blown away as needed.</li>
<li>Now the feature is in release, and can be shared with other developers making features based off <code>release</code> branch.</li>
<li>Release is good, merge to master. Deploy, make a new release branch based off master.</li>
</ol>
I know it sounds like a lot, but it really will save you from the headaches that, in my experience, are inevitable with large projects with lots of people having differing levels of knowledge.
If you have a small team with a simple release process, or a team that is very experienced - all this may not be necessary - but do be aware of the inherent problem with testing multiple people's code at the same time in your <code>dev</code> branch.
All that said, it's my understanding the GitHub team just lets everyone merge into master directly (after a brief code review) and auto deploys ~30 times a day.
|
The general outline sounds good. The only thing I would change is that the base boxes themselves should be self-updating. Presumably your provisioning scripts also live in git so you could automatically update the boxes by a properly configured <code>/etc/rc.local</code> script. The base box should just have the bare minimum necessary to start and then download all the necessary pieces for configuration. This way you decouple the distribution of the boxes from the actual update process and don't need to ship an entire new base box every time you make changes, excepting OS upgrades of course.
|
https://softwareengineering.stackexchange.com
|
2,029
|
[
"https://engineering.stackexchange.com/questions/2029",
"https://engineering.stackexchange.com",
"https://engineering.stackexchange.com/users/858/"
] |
I have an 80 g·cm motor with a rotational frequency of 15,000 rpm. I want to lift a weight of 2 kg at a speed of 0.5 m/s. How do I calculate the gear ratio required?
|
<blockquote>
I have a 80gcm motor with a rpm of 15000.<br>
I want to lift a weight of 2 kg at a speed of 0.5 m/s.<br>
How do I go about calculating the gear ratio required for this?
</blockquote>
<strong>Firstly - is it possible?</strong>
In particular, is there enough input power available for the desired output power?
To within about 2% a very handy formula applies - it can be derived conventionally and seen that several factors happen to cancel nicely.
Watts = kg x metres x RPM
80 gram∙cm = 0.080 kg x 0.01 m
So for input W = 0.080 kg x 0.01 m x 15000 = 12 Watts.<br>
This is the maximum Wattage you can deliver if properly geared at 100% efficiency<br>
(we should be so lucky).
Desired power = Force x distance per unit time<br>
Watts = Joules/sec = mg∙d/s
= 2 kg x $g$ x 0.5 m/s = 2 x 9.8 x 0.5 = 9.8 Watts
So to work at all overall efficiency needs to be at least 9.8/12
or greater than about 82%.<br>
That's potentially doable but also potentially difficult.
<strong>Now to the actual problem.</strong>
The following assumes that the output weight or force is taken from the end of a radius of the driven "gear". If output is instead taken from e.g. a windlass drum at a smaller diameter than the driven gear, the ratios will be scaled based on the relative diameters. Ignore that for now.
Torque_in x RPM_in = Torque_out x RPM_out at 100% efficiency
or RPM_out = Torque_in x RPM_in / Torque_out at 100% efficiency
So:
RPM out = 0.080 kg x 0.01 m x 15000 RPM / (2 kg x 0.5 m) = 12 RPM
So gear ratio = 15000/12 = 1250:1
The specification of not just output Torque but actual force (2 kg x $g$) constrains the actual output pulley size if output is taken off at the pulley radius.
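A quick numeric check of the arithmetic above, as a minimal Python sketch (the variable names are mine; 100% efficiency and the 0.5 m output radius are assumed, as in the answer - the exact available power comes out slightly above the 12 W estimate from the handy formula):
<pre><code>import math

# Values from the question: 80 g*cm of motor torque at 15000 RPM, lift 2 kg at 0.5 m/s
torque_motor_nm = 0.080 * 0.01 * 9.81            # 80 g*cm expressed in N*m
omega_motor = 15000 * 2 * math.pi / 60           # motor speed in rad/s

power_available = torque_motor_nm * omega_motor  # W at 100% efficiency (~12.3 W)
power_needed = 2 * 9.81 * 0.5                    # W to lift 2 kg at 0.5 m/s (~9.8 W)

torque_out_nm = 2 * 9.81 * 0.5                   # output torque if the force acts at a 0.5 m radius
rpm_out = torque_motor_nm * 15000 / torque_out_nm
gear_ratio = 15000 / rpm_out

print(f"{power_available:.1f} W available vs {power_needed:.1f} W needed")
print(f"output speed {rpm_out:.0f} RPM, gear ratio {gear_ratio:.0f}:1")   # 12 RPM, 1250:1
</code></pre>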
|
If I understand correctly the problem is like this
<img src="https://i.stack.imgur.com/PjnkN.png" alt="enter image description here">
The velocity of the load is $R\omega=0.5=R\underbrace{\frac{\pi}{30}\frac{15000}{n}}_\omega$
Solving for $n$ we get
$$n=1000 \pi R$$
|
https://engineering.stackexchange.com
|
194,655
|
[
"https://softwareengineering.stackexchange.com/questions/194655",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81031/"
] |
I am confused when I read this (regarding singleton design pattern):
<blockquote>
How do we ensure that a class has only one instance and that the instance is easily
accessible? A global variable makes an object accessible, but it doesn't keep you from
instantiating multiple objects.
</blockquote>
So what is the use of singleton pattern if we can create multiple instances?
<strong>SOURCE:</strong>
Design Patterns - Elements Of Reusable Object Oriented Software (1995) - Gamma, Helm, Johnson, Vlissides
|
Without the full text this is not certain, but my (somewhat educated) guess:
They only warn that a global variable is not the right way to ensure that you have a singleton. The following text should then show how to do this inside the class that should be a singleton.
|
In the quote they don't talk about how to do it, but how <strong>not</strong> to do it.
One approach for getting a <em>singleton</em> is to make the <em>constructor private</em> and write your own static method that creates a new instance on the first call, saves it in a static variable, and always returns that object when called again.
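A minimal sketch of that approach in Python (Python has no truly private constructors, so this only illustrates the static-method-plus-static-variable idea):
<pre><code>class Config:
    _instance = None  # "static" (class-level) variable holding the single instance

    @classmethod
    def instance(cls):
        # Create the object on the first call, then always return the same one.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

a = Config.instance()
b = Config.instance()
assert a is b  # both names refer to the one and only instance
</code></pre>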
|
https://softwareengineering.stackexchange.com
|
732,885
|
[
"https://physics.stackexchange.com/questions/732885",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/348701/"
] |
I'm currently learning Newton's Laws, and I came across this question:
<blockquote>
"A man lightly walks across a roof so that he does not break it. However, his friend tosses him a tool, causing him to jump in the air, and break through the roof. Why does the roof hold his weight when he walks, but not when he jumps?"
</blockquote>
My confusion stems from the fact that in both cases there is a force of gravity and normal force acting upon him when he is on the ground. The only difference is his acceleration. And yet, if there is acceleration, there must be unbalanced forces, but what are those unbalanced forces caused by? How does he break through the roof?
|
Ordinarily, when the man is standing or walking on the roof, his weight is acting downwards and is cancelled by a corresponding upward force from the roof, equal in magnitude with his weight.
If the man jumps in the air, his legs have to impart a force greater than his weight in order to accelerate his body, so the roof has to bear that extra downward force from his legs, which it might safely do.
When the man lands back on the roof after jumping in the air, the man's body must be suddenly brought to a stop, which will require a momentary force far in excess of his weight, and that is too much for the roof to bear, so the man falls through it.
You might find it helpful to consider how the roof responds to forces imposed on it. If the roof is supported by timber rafters, say, the rafters will sag somewhat under an imposed load. The sagging bends the wood and sets up extra 'restorative' forces that resist further sagging, much as a spring under load compresses to the point at which the restorative force caused by the compression equals the applied load.
If a man stands on the roof, the rafters sag some more until the restorative forces within the timbers cancel out the man's weight, at which point they sag no further. If the man lands on the roof at some speed, the impact causes the beams to sag beyond the point at which the rafters will break, so the roof can no longer generate a restorative force to support the man.
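To make the "momentary force far in excess of his weight" quantitative: if the man of mass <span class="math-container">$m$</span> lands with speed <span class="math-container">$\Delta v$</span> and the stiff roof stops him in a short time <span class="math-container">$\Delta t$</span>, the impulse-momentum relation (my notation, not from the original answer) gives an average force of roughly
<span class="math-container">$$\bar F \approx mg + \frac{m\,\Delta v}{\Delta t},$$</span>
which grows without bound as the stopping time shrinks.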
|
You have already correctly identified the forces. The unbalanced force is simply the normal force being <em>larger</em> than before.
The normal force must now, apart from holding back against his weight, also create the upwards acceleration. Thus it grows.
If the surface is not strong enough to exert this new larger normal force, then the surface will break.
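Written as an equation (my notation): while the man is pushed upward, or brought to rest on landing, with upward acceleration <span class="math-container">$a$</span>, Newton's second law in the vertical direction gives
<span class="math-container">$$N - mg = ma \quad\Rightarrow\quad N = m(g + a) > mg,$$</span>
so the surface must supply a normal force larger than his weight, and it breaks if it cannot.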
|
https://physics.stackexchange.com
|
198,964
|
[
"https://dba.stackexchange.com/questions/198964",
"https://dba.stackexchange.com",
"https://dba.stackexchange.com/users/145699/"
] |
I am using SQL server and I can't seem to construct the query I want. I have a table with several columns, among them are
PARAMETER_NAME, GW_LOCATION_ID, Report_Result, DETECT_FLAG
I want a query to return
<ol>
<li>a row for each unique Parameter-Location combination </li>
<li>the maximum value of the <strong>Report_Result</strong> column for that unique Parameter-Location combination and </li>
<li>the <strong>DETECT_FLAG</strong> value associated with the maximum value. </li>
</ol>
There are other columns in addition to <strong>DETECT_FLAG</strong> that I want to return but I think that if I can get this part worked out, I should be able to return the other columns similarly.
The query works if I group by <strong>PARAMETER_NAME</strong> and <strong>GW_LOCATION_ID</strong> and aggregate the <strong>Report_Result</strong> column (see below). However, when I add the <strong>DETECT_FLAG</strong> column, I get the error, "<em>Column 'SLVs_Flagged.DETECT_FLAG' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.</em>" What I want is the value of DETECT_FLAG for the row returned by MAX(<strong>Report_Result</strong>).
<pre><code>SELECT
PARAMETER_NAME, GW_LOCATION_ID, MAX(Report_Result)
FROM SLVs_Flagged
GROUP BY PARAMETER_NAME, GW_LOCATION_ID
ORDER BY PARAMETER_NAME, GW_LOCATION_ID;
</code></pre>
I have tried to do a subselect to return the <strong>DETECT_FLAG</strong> value that corresponds to MAX(<strong>Report_Result</strong>), but when I try to use a WHERE condition I get another error. Please advise how I can execute this query.
Here is a subset of the data for testing.
<pre><code>PARAMETER_NAME GW_LOCATION_ID Report_Result DETECT_FLAG
Perchlorate CDBO-6 2.38 N
Perchlorate CDBO-6 1.45 N
Perchlorate CDV-16-02655 4 N
Perchlorate CDV-16-02655 0.537 Y
Perchlorate CDV-16-02655 4 N
Perchlorate CDV-16-02656 100 N
Perchlorate CDV-16-02656 0.394 Y
Perchlorate CDV-16-02656 4 N
Perchlorate CDV-16-02656 4 N
Perchlorate CDV-16-02657 4 N
Perchlorate CDV-16-02657 4 N
Perchlorate CDV-16-02657 4 N
Perchlorate CDV-16-02657 0.174 Y
Perchlorate CDV-16-02658 4 N
Perchlorate CDV-16-02658 4 Y
Perchlorate CDV-16-02658 0.126 Y
Perchlorate CDV-16-02658 0.0561 Y
Perchlorate CDV-16-02658 20 N
Perchlorate CDV-16-02658 4 N
Perchlorate CDV-16-02659 4 N
Nitrate as Nitrogen R-16 S4 0.003 N
Nitrate as Nitrogen R-20 S1 0.003 N
Nitrate as Nitrogen R-20 S1 0.003 N
Nitrate as Nitrogen R-20 S1 0.003 N
Nitrate as Nitrogen R-20 S2 0.003 N
Nitrate as Nitrogen R-20 S2 0.003 N
Nitrate as Nitrogen R-20 S3 0.003 N
Nitrate as Nitrogen R-20 S3 0.003 N
Nitrate as Nitrogen R-20 S3 0.003 N
Nitrate as Nitrogen R-27 0.003 N
Nitrate as Nitrogen R-31 S2 0.003 N
Nitrate as Nitrogen R-32 S3 0.003 N
Nitrate as Nitrogen R-32 S3 0.003 N
Nitrate as Nitrogen Test Well 1A 0.01 N
Nitrate as Nitrogen Test Well 1A -0.01 N
Nitrate as Nitrogen Test Well 2 0.04 N
Nitrate as Nitrogen Test Well 2 0.01 N
Nitrate as Nitrogen Test Well 2A 0 N
Nitrate as Nitrogen Test Well 3 0.04 N
Nitrate as Nitrogen Test Well 3 0.04 N
Nitrate as Nitrogen Test Well 4 0.04 N
Nitrate as Nitrogen Test Well 4 0.04 N
Nitrate as Nitrogen Test Well 4 0.04 N
Nitrate as Nitrogen Test Well 4 0.04 N
Nitrate as Nitrogen Test Well 4 0.04 N
Nitrate as Nitrogen Test Well 4 0.04 N
Nitrate as Nitrogen Test Well 4 0.04 N
Nitrate as Nitrogen Test Well 8 0.04 N
Nitrate as Nitrogen Test Well 8 0.04 N
</code></pre>
|
This is a <code>greatest-n-per-group</code> problem and there are many ways to solve it (<code>CROSS APPLY</code>, window functions, subquery with <code>GROUP BY</code>, etc). Here's a method using window functions and a CTE:
<pre><code>WITH ct AS
( SELECT *,
rn = RANK() OVER (PARTITION BY PARAMETER_NAME, GW_LOCATION_ID
ORDER BY Report_Result DESC)
FROM SLVs_Flagged
)
SELECT PARAMETER_NAME, GW_LOCATION_ID,
Max_Report_Result = Report_Result,
DETECT_FLAG
-- more columns
FROM ct
WHERE rn = 1
ORDER BY PARAMETER_NAME, GW_LOCATION_ID ;
</code></pre>
The query will return all tied results (if there are ties). If you want a single result for every <code>(PARAMETER_NAME, GW_LOCATION_ID)</code> combination, you can resolve ties by using <code>ROW_NUMBER()</code> instead of <code>RANK()</code> and modifying the <code>ORDER BY</code> inside the <code>OVER (..)</code> clause. Eg. (prefer <code>DETECT_FLAG</code> with <code>N</code> over <code>Y</code>):
<pre><code> rn = ROW_NUMBER() OVER (PARTITION BY PARAMETER_NAME, GW_LOCATION_ID
ORDER BY Report_Result DESC, DETECT_FLAG)
</code></pre>
|
I think the following query is more intuitive to understand:
<pre><code>create table SLVs_Flagged (PARAMETER_NAME varchar(30), GW_LOCATION_ID varchar(30), Report_Result int, DETECT_FLAG char(1) )
go
insert into dbo.SLVs_Flagged
values ('abc', 'cor1', 128, 'N'), ('abc', 'cor1', 12, 'Y')
, ('def', 'cor1', 500, 'Y'), ('def', 'cor1', 50, 'N')
go
;with s as (
select max(Report_Result) as Report_Result, PARAMETER_NAME, GW_LOCATION_ID
from SLVs_Flagged
group by PARAMETER_NAME, GW_LOCATION_ID)
SELECT s.Report_Result, s.PARAMETER_NAME, s.GW_LOCATION_ID, t.DETECT_FLAG
FROM SLVs_Flagged t
inner join s
on t.PARAMETER_NAME = s.PARAMETER_NAME and t.GW_LOCATION_ID = s.GW_LOCATION_ID and t.Report_Result = s.Report_Result
</code></pre>
|
https://dba.stackexchange.com
|
39,800
|
[
"https://electronics.stackexchange.com/questions/39800",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/12134/"
] |
Let say I have a lamp which is operated on 220 V ac.
Instead of applying 220 AC voltage I apply 220 DC voltage: what will happen? And if I want to convert this 220V dc to 220V ac then what should I do?
|
An incandescent bulb will do fine. The 220 V AC is the RMS value, for Root Mean Square. The sine's amplitude will be \$\sqrt{2}\$ higher than that, or 310 V. But the RMS value tells you what equivalent DC voltage you would need to get the same power, so that's exactly what you need. The bulb will use the same power and light as bright under 220 V AC as DC.
Switching on an incandescent bulb may cause a large current peak: the cold resistance is only about a tenth of what it is when the lamp is lit, and when the voltage applied is high at that time the lamp may break. You may have noticed that <em>if</em> a bulb breaks it always does when switching on. So at AC worst case is when you switch on at the peak of the sine, at 310 V. But there will be lots of cases when the voltage is lower when switching on, even zero if you just happen to switch on during a zero-crossing of the sine. In fact that's the best thing to do for the bulb's longevity.
At DC you don't have this; anytime you switch it on it will be 220 V. Not as bad as 310 V, but you can't use zero-crossing switching either.
<strong>about RMS</strong><br>
Why do we use RMS value instead of just the average? The average of a sine is just zero, so that doesn't help at all. If we want to know how much power a voltage generates in a load we have to use the power equation
\$ P = V \times I = \dfrac{V^2}{R} \$
It's the second form we're interested in. Power is proportional to voltage squared; that's where the "S" in RMS comes from: we square the voltage.
<img src="https://i.stack.imgur.com/gHCqt.png" alt="enter image description here">
The blue sine is our AC voltage, 1 V peak. The purple curve is that voltage squared, and the yellowish is the average of that, or mean: the "M" in RMS. It's precisely 0.5 V\$^2\$. It has still the dimension of voltage squared, so to get to a voltage quantity we take the square root of that, that's \$\frac{\sqrt{2}}{2}\$ V. The "R" in RMS. So RMS spelled out in full means: "the square root of the average of the voltage squared".
This shows that the amplitude (1 V) is \$\sqrt{2}\$ higher than the RMS value. That's where the 310 V comes from: 220 V \$\times \sqrt{2}\$ = 310 V.
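Spelled out as a formula, the procedure described in words above is \$V_{RMS} = \sqrt{\frac{1}{T}\int_0^T v^2(t)\,\text{d}t}\$; for a sine \$v(t) = \hat{V}\sin(\omega t)\$ this works out to \$V_{RMS} = \frac{\hat{V}}{\sqrt{2}}\$, which is exactly the 310 V peak / 220 V RMS relation used above.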
|
YES, your incandescent lamp will run on DC. In fact Thomas Edison, who had a design contract to improve the reliability of the lightbulb and hence commercialise it, did all his experiments on DC. Edison wanted all power to be DC but lost to Nikola Tesla. NOW a word of warning: DC arcs much worse than AC, so unless your switch is huge and 100 years old, check its rating. Many 230 Vac switches are only rated for 28 Vdc.
|
https://electronics.stackexchange.com
|
14,480
|
[
"https://cs.stackexchange.com/questions/14480",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/10249/"
] |
I'm willing to take a course in formal languages and automata theory, where we will explore, side by side, a functional programming language to implement the different algorithms we will encounter. Although I am new to the language, I am assuming I've learned what's necessary to move on to the theoretical part, so I'm wondering: what are the must-knows of FL & AT for a programmer? That is, what are the most fundamental algorithms one necessarily has to know while studying formal languages and automata theory?
|
I have looked into this question before and came up very short. From it, I believe that it may be the case that a cipher with this property is simply the Vernam Cipher (OTP) in disguise. It is still an interesting question as to how the use of additional resources or changes in protocol can potentially implement an equivalent to OTP with less effort. While it, itself, might not be exactly considered a cipher (or even a huge deviation from the one time pad), I found the following information-theoretically secure cryptosystem to be incredibly interesting when I stumbled upon it a couple years back:
There is a more recent scheme (early 2000's), by Michael Rabin, called <em>hyper-encryption</em>, that can be shown to be information theoretically secure given the assumption that your adversary is space bounded (most others have assumptions on time bounds). As required by the scheme, there must be a very large, public, source of random bits.
The basic premise is for the sender and recipient to use the key to methodically pick out a one time pad from the random source of bits (using randomness extractors). In order to get this pad, the adversary would have to not only store the ciphertext, but also would need to store almost all of the random bits for use later. Their limited ability to store information would prevent them from doing this.
After the transmission is complete, access to the random bits is closed. The theory here is that, even if the key was revealed afterwards, that the adversary would not have enough information to establish anything about the plaintext. This remains true even if they had unlimited computational power, since no amount of post-analysis will get around the fact that the one time pad used depended on information that the adversary did not store. Since this security is not compromised even if the key is revealed after the transmission, the scheme is said to have <em>perfect forward secrecy</em>.
|
Any perfect cipher that you will ever encounter will be a one time pad or a variant of it.
|
https://cs.stackexchange.com
|
4,395
|
[
"https://chemistry.stackexchange.com/questions/4395",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/1160/"
] |
I'm doing exercises on hybridisation, and I was given this molecule:
<img src="https://i.stack.imgur.com/2sGV9.png" alt="enter image description here">
I'm wondering about this (electron deficient) oxygen atom. My intuition says it should be ${sp^2}$ like the answer says, but honestly I only see that it's $sp$ hybridised. Is it because you ask what it would look like if it was not electron deficient?
|
This would become much clearer if you show the lone pair on the oxygen atom.
One $p$ orbital of the Oxygen atom forms the $\pi$ bond with Carbon, while the other three (remaining one $s$ and two $p$) orbitals are $sp^2$ hybridized.
Two $sp^2$ orbitals are used to form the $\sigma$ bonds while the last $sp^2$ orbital is occupied by the lone pair.
|
You can find the hybridization of the atom by finding its steric number:
steric number = number of atoms bonded to the atom in question + number of lone pairs on that atom.
If the steric number is 4, the atom is $\ce{sp^3}$ hybridized.
If the steric number is 3, the atom is $\ce{sp^2}$ hybridized.
If the steric number is 2, the atom is $\ce{sp}$ hybridized.
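Applied to the oxygen in the question, using the description from the other answer (two $\sigma$-bonded neighbours plus one lone pair): steric number $= 2 + 1 = 3$, so the oxygen is $sp^2$ hybridised.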
|
https://chemistry.stackexchange.com
|
169,061
|
[
"https://security.stackexchange.com/questions/169061",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/158585/"
] |
I am in the process of developing an iOS app where a customer can earn rewards (money) to spend back at the business. When the customer wants to spend the money they have earned, the cashier will use the <strong>employee app</strong> to scan the customer's QR code on the <strong>customer app</strong>. To clarify, yes, there are 2 apps. One for the customer (where their identifying QR code is) and one for the employee to scan the QR codes and process the payment. My question is, how can I make this transaction secure (or as secure as possible)?
Here is what I am thinking as a potential plan, please correct me where I am wrong:
<ol>
<li>When the customer logs in, their credentials are stored in Keychain (of course password is encrypted): customer_id, customer_email, customer_password.</li>
<li>Along with these 3 separate identifying pieces of information, I will also send an encrypted string consisting of those 3 pieces of information. This encrypted string will end up being the QR code value. For example, the encrypted version of "customer_id|customer_email|customer_password" or "19302|example@gmail.com|password123". The | characters are used to separate the data so it can be processed later.</li>
<li>I will encrypt this 3-piece identifier server side since doing any encryption on the app side (client) is probably risky. So the following 4 pieces of data come to the user's app: customer_id, customer_email, customer_password (encrypted), 3_piece_identifer (encrypted).</li>
<li>Now the user has the encrypted 3_piece_identifer in their Keychain in the app. This 3_piece_identifer will be the QR code that will be scanned by the employee. Once again, 3_piece_identifer is "19302|example@gmail.com|password123" encrypted just as an example.</li>
<li>So now the employee uses the employee app to scan the customer's QR code (which resides on the customer app). The employee just received the encrypted 3_piece_identifer.</li>
<li>Still on the employee app side, the 3_piece_identifer is now decrypted and processed server-side then check in the database for this customer's additional information like current balance.</li>
</ol>
Am I doing it completely wrong?
|
I think you are protecting against your threat the wrong way, and I think you're also missing out on protecting against some additional threats.
<strong>Hashing vs encryption</strong>
First and most important: the end user's password should never be encrypted. It should only ever be hashed. Encryption is reversible but hashes aren't. In the event of system breaches, you don't want someone to be able to decrypt an encrypted user password. That could just be a simple matter of accidentally using the wrong terminology: it's easy to confuse the two.
<strong>The goal</strong>
Before jumping in, it helps to reiterate the big picture: what are you trying to do here? You are trying to make it easy for a kiosk to identify a user securely so that they can use a pre-paid account balance.
<strong>Breaking it down</strong>
The simplest solution would be to simply use the user's unique id to build the QR code. This would get the job done very easily. The problem with that (which your solution is trying to solve), is that the user id is probably guessable, so someone might easily figure out how to build your QR codes and start generating them for random user ids until they find someone with lots of money on their account.
Your use of the three piece identifier is designed to circumvent this problem. The goal is to make it so that only the actual user is able to generate the QR code and use their money. The issue I see is that you have used the user's password in the QR code generation process. This has the benefit of making it hard for anyone other than the user to generate the QR code, but it has a big disadvantage of increasing the "surface area" for the user's password to be stolen. Inherently, the more you use the user's password, and the more it transfers back from client to server (in any form), the more opportunities there are for a malicious user to steal it and use it to break into the actual account. Granted, doing so would require understanding the form of your 3_piece_identifier and then doing a brute-force on the password, but such things are not outside of the realm of possibility, even if it isn't the highest on the list of concerns.
<strong>An important attack vector</strong>
However, your scenario doesn't protect against a simpler and much easier attack vector: simple replay attacks. Nothing in your scenario ever causes the QR code to change, so if anyone could simply get a picture of the QR code created by the authorized user, that is all that would be necessary to pretend to be them in the future. It could be as simple as literally taking a picture of someone else's phone while they have the QR open (maybe the person standing in front of you in line?) and then opening up that picture from your phone's gallery, and scanning the picture of the QR code when you checkout. Unless the people running the cash register are paying attention (hint: they usually don't), you get to use someone else's balance with little effort.
<strong>One possible way to solve all of this</strong>
As a result, what you have isn't going to work (I don't think). I think you should not bother using the password as part of the verification, and you need something to protect against replay attacks. It can actually be very simple: when the user loads up the QR code on their phone they can hit up an API server-side to request a one-use key. This will be generated server side, and it can even just be a completely random string: it doesn't have to include the user's email/id/password, but instead is simply associated with that user's account. Make sure you use a cryptographically secure random number generator, of course. Make it nice and long (as long as it fits well in a QR code), make sure it is unique, store it server side associated with the customer along with an expiration time (no more than a couple minutes even), and then send it down to the client. The client application turns it into a QR code, the employee app scans it, sends it back up, the server verifies that it is valid, that it hasn't expired, and can now bill the client's account properly. As long as the API endpoint that generates the one-use-key requires a logged in user to work, everything should be nice and secure (or at least, as secure as the user's account is). To recap:
<ol>
<li>User logs in</li>
<li>User requests to pay with the account balance</li>
<li>Client sends up request of single-use-key</li>
<li>Server generates (via CSPRNG) a unique single-use-key and stores it with the user's account, along with an expiration time</li>
<li>Server sends single-use-key back down to client</li>
<li>Client turns single-use-key into QR code</li>
<li>Employee app scans QR code and converts back to single-use-key</li>
<li>Employee app sends up single-use-key to server</li>
<li>Server verifies key's validity and can now charge the proper account</li>
</ol>
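A minimal server-side sketch of steps 4 and 9 (Python; the in-memory store and the function names are illustrative only - a real service would use its database):
<pre><code>import secrets
import time

pending_keys = {}        # single_use_key -> (customer_id, expires_at); illustrative store
KEY_TTL_SECONDS = 120    # "no more than a couple minutes"

def issue_single_use_key(customer_id):
    # Step 4: CSPRNG-generated, unique, stored with an expiration time
    key = secrets.token_urlsafe(32)
    pending_keys[key] = (customer_id, time.time() + KEY_TTL_SECONDS)
    return key           # steps 5/6: sent to the client, which renders it as a QR code

def redeem_single_use_key(key):
    # Step 9: verify validity and expiry; popping the key makes it truly single-use
    entry = pending_keys.pop(key, None)
    if entry is None:
        return None      # unknown or already used
    customer_id, expires_at = entry
    if time.time() > expires_at:
        return None      # expired
    return customer_id   # caller can now charge this account
</code></pre>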
I would also be curious to see what suggestions others may have.
|
Conor's solution is pretty great, but it has drawbacks. For example, the question that @user3451821 asked in the comment section: what should I do if the customer doesn't have access to the Internet? Conor's solution can't solve this. Another concern is that the server is required to do too much: generating the one-time password, storing it in the SQL server, pushing it to the customer's device, then pulling it out to verify, and finally deleting it and deducting the balance. This takes up a lot of compute and network resources, especially when you have a large number of customers using your service at the same time.
So I propose a small improvement. It requires a bit more experience in cryptography and tweaks the processing flow, but it is worth it for decreasing the server workload, and it eliminates the need for Internet access on the client side (though not during the registration or log-in phases).
Registration
<ol>
<li>select "create new account"</li>
<li>fill the required data</li>
<li>The user's device generates an Ed25519 key pair</li>
<li>Upload the key pair's public key to the server</li>
<li>The server saves the public key along with the other user details</li>
</ol>
When the user wants to use the balance/credit:
<ol>
<li>The user opens the app</li>
<li>Selects "generate my QR code"</li>
<li>The app signs the user id and time stamp (optionally with a nonce) using the key pair's private key, like so: <code>signature=SIGN("user_id|time_stamp", priv_key)</code></li>
<li>Put <code>user_id-timestamp-signature</code> or any similar structure into a QR code and display it. Note: you must use the same timestamp that was used in the signature process, and you must pass all 3 pieces (user id, timestamp and the signature) to the server for verification purposes.</li>
<li>The employee scans the QR code and sends it to the server for verification</li>
<li>The server pulls the public key from the SQL server and verifies the signature</li>
<li>If all is good, deduct the balance from the user's account and return a success value to the employee's app</li>
</ol>
In this solution, the server is only responsible for storing & reading the user's public key and performing a signature verification. Nothing more.
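A minimal sketch of the sign/verify steps (Python, using the <code>cryptography</code> package; the <code>user_id|time_stamp</code> message layout mirrors the pseudo-code above, and key storage/serialisation is omitted):
<pre><code>import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration, on the customer's device: generate the key pair once.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()   # the public half is what gets uploaded

# Paying: sign "user_id|time_stamp"; id, timestamp and signature go into the QR code.
user_id, timestamp = "19302", str(int(time.time()))
message = f"{user_id}|{timestamp}".encode()
signature = private_key.sign(message)

# Server side: rebuild the message from the received fields and verify.
try:
    public_key.verify(signature, f"{user_id}|{timestamp}".encode())
    print("signature valid - check timestamp freshness, then deduct the balance")
except InvalidSignature:
    print("invalid signature - reject")
</code></pre>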
|
https://security.stackexchange.com
|
533,472
|
[
"https://electronics.stackexchange.com/questions/533472",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/116972/"
] |
Sorry if my question is very basic, but I couldn't find the answer by searching the internet.
I want to buy a JTAG programmer, but in the datasheet of this programmer there is a list of "supported MCUs". Does this mean that this programmer cannot program other MCUs which support JTAG?
As an example, ST Link V2 says that it is for STM8 and STM32 MCUs. Does this mean it cannot be used to program other MCUs?
What about JTAG programmers which are for ARM MCUs? Does this mean they cannot be used for non-ARM MCUs (even though they support JTAG)?
If a JTAG programmer could be used to program any MCU which supports JTAG, then what is the difference between these programmers that makes them specific to a specific microcontroller?
I am mainly concerned with programming the MCU, although having debugging capability would be a nice addition.
Any help would be welcome.
Thank you in advance.
|
JTAG is the interface, the voltages, the max clock speeds, that make up the physical connection.
To interface to a particular device, the programmer needs to know its specific commands, its registers, that is the device API (Application Programmer Interface).
JTAG was originally designed as a test standard; it only supports a small set of test-related functions as standard, for instance boundary scan. However, the general idea of connecting a PC and a target system together with a simple interface is so appealing that many vendors have added their own extensions, for programming, breakpoints, and all sorts of other useful things. These extensions are not standardised.
While JTAG can be 'bit-banged' from a PC, this is very slow, and most 'JTAG Programmers' incorporate a local MCU which speeds up the process, connected to the target by a short JTAG lead, and commonly to the PC by USB or ethernet. The PC to programmer communications are often proprietary, and unlikely to be available to amateurs. The programmer will have embedded firmware to control the target's JTAG extension protocols; you may have more luck finding these available publicly from the likes of ARM etc.
This means if you want to go outside the published 'this programmer works with this MCU', you're more likely to be able to hack the target's JTAG directly from a PC, than to hack a commercial programmer to work with a target not claimed to work with it. Get the API from the target MCU manufacturer, use an Arduino as the physical JTAG interface, and write C or python or something to control the Arduino over its USB interface, putting the low level tasks on the Arduino and the high level control on the PC as required.
|
In theory, yes. In practice, the JTAG extensions to support debugging are device-specific, and even in some cases need local hardware acceleration (like ARM fast trace.)
MIPS platforms have long used a standard extension called E-JTAG. Some others followed that basic E-JTAG definition as a starting point for their proprietary debuggers.
One thing to watch for: many JTAG solutions use a popular USB interface IC, the FTDI FT232C. This chip is used in the Digilent adapters for example. If your toolchain supports this IC it's likely you could use a third party adapter, or even design the chip right in to your board (like some FPGA boards do.)
|
https://electronics.stackexchange.com
|
536,699
|
[
"https://physics.stackexchange.com/questions/536699",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/136556/"
] |
Given a metric <span class="math-container">$g_{\mu\nu}$</span>, is there a general recipe to define the null coordinates for the system?
For the Schwarzschild metric, I think we can find it by the following method.
<span class="math-container">$$ds^2 = -f(r)dt^2 + \frac{1}{f(r)}dr^2$$</span>
For a null coordinate <span class="math-container">$ds^2=0$</span>, which gives
<span class="math-container">$$\int dt = \pm \int \frac{dr}{f(r)} + const.$$</span>
So, the null coordinates are <span class="math-container">$t\pm\int \frac{dr}{f(r)}$</span>. Is this method correct?
What about more complicated metrics that have off-diagonal terms like <span class="math-container">$f(r,t) dt dr$</span>? What do we do in those cases?
|
I don't know if it is the best way to look for such coordinate (probably not), but for general metric <span class="math-container">$$ds^2=g_{tt}dt^2+2g_{tr}dtdr+g_{rr}dr^2$$</span>
on given timelike subspace you can look for coordinate transformations <span class="math-container">$t\rightarrow t'(t,r), r\rightarrow r'(t,r)$</span> that takes metric into the form
<span class="math-container">$$ds^2=2g_{t'r'}dt'dr'+g_{r'r'}dr'^2.$$</span> The coordinate <span class="math-container">$t'$</span> will then be null, since <span class="math-container">$g(\partial_{t'}, \partial_{t'})$</span> is trivially zero.
Transforming the metric and collecting <span class="math-container">$dt'^2$</span> terms you get the condition:
<span class="math-container">$$
0=g_{tt}\left(\frac{\partial t}{\partial t'}\right)^2+2g_{tr}\frac{\partial t}{\partial t'}\frac{\partial r}{\partial t'}+g_{rr}\left(\frac{\partial r}{\partial t'}\right)^2
$$</span>
<strong>Edit</strong> to answer the comment:
The solution of above differential condition is certainly not unique. At the subspace, there are two null directions at every point, but once you pick one direction at one point, you need to (in general) stick with it, because the coordinate vector field <span class="math-container">$\partial_{t'}$</span> needs to be smooth. So the direction at every point is (in general) unique, but not so the "length" of vectors (the actual length is zero, but one can still compare two vectors of same direction by the parameter <span class="math-container">$\alpha$</span> from equation <span class="math-container">$v_1=\alpha v_2$</span>) of the coordinate vector field. Obviously any vector field of the form
<span class="math-container">$$h(r',t')\partial_{t'}$$</span>
is still null. For such vector field to lead to a coordinate, you need to be able to find vector field <span class="math-container">$\partial_{r''}=u_{t'}(r',t')\partial_{t'}+u_{r'}(r',t')\partial_{r'}$</span> with vanishing commutator:
<span class="math-container">$$0=[h(r',t')\partial_{t'},\partial_{r''}]=h \partial_{t'} u_{t'}+ h \partial_{t'} u_{r'}-u_{t'}\partial_{t'} h -u_{r'}\partial_{r'} h.$$</span>
Such vector field you can find by requiring these two pairs to vanish independently:
<span class="math-container">$$0=h \partial_{t'} u_{t'} - u_{t'}\partial_{t'} h $$</span>
<span class="math-container">$$0= h \partial_{t'} u_{r'}-u_{r'}\partial_{r'} h $$</span>
These can be rewritten to the form :
<span class="math-container">$$ \partial_{t'} u_{t'} =H_{t'} u_{t'} $$</span>
<span class="math-container">$$\partial_{t'} u_{r'}=H_{r'}u_{r'},$$</span>
where <span class="math-container">$H_{i}=\partial_i h / h$</span> are known functions.These first order partial differential equations can be always solved, so indeed every vector field of the form <span class="math-container">$$\partial_{t''}=h(r',t')\partial_{t'}$$</span> leads to null coordinate also (assuming <span class="math-container">$h \neq 0$</span>). Which one of all possible null coordinates you want is then up to you and the application.
|
Null coordinates are, as the name imply, coordinates along which some null curves flow. A simple way to do this is to use the flow-box theorem.
Take two vector fields, <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>. <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are null vector fields if <span class="math-container">$g(X,X) = g(Y,Y) = 0$</span>, and <span class="math-container">$X, Y \neq 0$</span>. There's a variety of such vector fields you can take, roughly corresponding to the rotation along the null cone at each point.
If, in addition, <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are nowhere proportional (ie <span class="math-container">$X \neq \alpha Y$</span>), you can define locally some coordinates <span class="math-container">$(u,v)$</span> such that <span class="math-container">$X = \partial_u$</span>, <span class="math-container">$Y = \partial_v$</span>, by taking <span class="math-container">$u$</span> and <span class="math-container">$v$</span> to be parameters of the integral curves of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>.
Example : for
<span class="math-container">\begin{equation}
ds^2 = -f(r) dt^2 + \frac{1}{f(r)} dr^2 + r^2 d\Omega^2
\end{equation}</span>
We are looking for two null vector fields, obeying the equation
<span class="math-container">\begin{equation}
-f(r) X_t^2 + \frac{1}{f(r)} X_r^2 + r^2(X_\theta^2 + \sin^2\theta X_\varphi^2) = 0
\end{equation}</span>
There's a variety we can use, but the simplest case is <span class="math-container">$X_\theta = X_\varphi = 0$</span>. So we are left with
<span class="math-container">\begin{equation}
X_r^2 = f^2(r) X_t^2
\end{equation}</span>
Two simple choices are <span class="math-container">$X = (1, f(r), 0, 0)$</span> and <span class="math-container">$Y = (1, -f(r), 0, 0)$</span>. It's not too hard to see that they are not proportional, and they are nowhere vanishing, as long as <span class="math-container">$f(r) \neq 0$</span>.
What is the flow of those vector fields? Take a curve <span class="math-container">$x(\lambda)$</span>, its flow is
<span class="math-container">\begin{equation}
\dot{x}(\lambda) = X(x(\lambda))
\end{equation}</span>
For our vector fields, that would be
<span class="math-container">\begin{eqnarray}
\dot{x}_t(u) &=& 1\\
\dot{y}_t(v) &=& 1\\
\dot{x}_r(u) &=& f(x(u))\\
\dot{y}_r(v) &=& -f(y(v))\\
\end{eqnarray}</span>
Pretty obviously, <span class="math-container">$x_t = u$</span>, <span class="math-container">$y_t = v$</span> (I'm picking a zero constant of integration here), and
<span class="math-container">\begin{eqnarray}
u &=& \int_0^r \frac{dy}{f(y)}\\
v &=& -\int_0^r \frac{dy}{f(y)}\\
\end{eqnarray}</span>
Therefore, our new coordinates are defined by
<span class="math-container">\begin{eqnarray}
u &=& \frac{1}{2} (t + \int_0^r \frac{dy}{f(y)})\\
v &=& \frac{1}{2} (t -\int_0^r \frac{dy}{f(y)})\\
\end{eqnarray}</span>
Probably a bit more effort than this is worth, but on the other hand, it will generalize locally to any metric.
|
https://physics.stackexchange.com
|
555,301
|
[
"https://physics.stackexchange.com/questions/555301",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/265764/"
] |
I am currently learning about the Dirac formalism in quantum mechanics, but don't quite understand how we derive the expression of the quantum Hamiltonian, given the value of energy in classical mechanics.
The specific example that came up in class was that of the harmonic oscillator, for which the classical energy is <span class="math-container">$$E = \frac{p^2}{2m} + \frac{1}{2}m\omega^2x^2$$</span>
My teacher then concluded that
<span class="math-container">$$\hat{H} = \frac{\hat{p}^2}{2m} + \frac{1}{2}m\omega^2\hat{x}^2$$</span>
Why is that? The only way I see to show this is by looking at a stationary wave function <span class="math-container">$\psi (x)$</span> and using the associated Schrödinger equation. We get that, by writing <span class="math-container">$V(x) = \frac{1}{2}m\omega^2x^2$</span>,
<span class="math-container">$$E\psi(x) = \frac{-\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2} + V(x)\psi = \hat{H}\psi(x)$$</span>
By identifying known expressions for <span class="math-container">$\hat{p}$</span> and <span class="math-container">$\hat{x}$</span>, we can find the desired expression for the Hamiltonian. However, I do not feel like this method is very satisfying, as it requires to return to wave functions, and doesn't use the Schrödinger equation in the Dirac formalism.
I am getting a feeling that teachers will eagerly replace <span class="math-container">$x$</span> by <span class="math-container">$\hat{x}$</span> and p by <span class="math-container">$\hat{p}$</span> when going from classical mechanics to quantum mechanics.
Is there a more general result? Can it be said that if in classical mechanics <span class="math-container">$E = f(x_1, \dots, x_n)$</span> where <span class="math-container">$x_1, \dots, x_n$</span> are observables, then <span class="math-container">$\hat{H} = f(\hat{x_1},\dots,\hat{x_n})$</span>? I cannot see why that would be true, so is it only a coincidence that it is true in the case of the harmonic oscillator?
To summarize, is there a rule for when such replacements are valid, and if so, for which observables and how can it be proven?
|
Your teacher is being a bit sloppy in saying that you get the Hamiltonian for quantum mechanics from classical energy. You get the Hamiltonian for quantum mechanics by "quantizing" the classical Hamiltonian. OK, so what is this "quantizing"?
As you point out, Dirac came up with a fairly generalized scheme of constructing quantum theories which correspond to a given classical theory in (one of) its classical limit(s). Now, keep in mind that we're guessing a quantum theory that we hope to reduce the classical theory at hand in some classical limit. Given that the quantum theory is the more basic theory, we cannot derive it generically from its classical limit. Anyway, so the idea is that a quantum system that respects the same symmetries as the classical system would be a good guess for the quantum version of the said classical system. In Hamiltonian mechanics, the Poisson brackets capture the symmetries of the system whereas in quantum mechanics, commutators do the same job. Thus, it'd make sense to make commutators of quantum operators to follow the same relations as the Poisson brackets of classical observables in Hamiltonian mechanics. I'm not aware if Dirac explicitly used the symmetry arguments but he did realize that Poisson brackets are the central objects of Hamiltonian formalism and thus set out to find their quantum analog which he found in commutators. See, the chapter titled "Quantum Conditions" from his excellent book <em>Principles of Quantum Mechanics</em>. Once we have done this for canonical coordinates and momenta, since all observables are functions of them, we can ensure the desired commutation relations for their quantum analogs by putting hats on the canonical coordinates and momenta in their classical expressions, barring unforeseen ordering ambiguities.
This caricature description of replacing every classical canonical variable (for example, <span class="math-container">$x$</span> and <span class="math-container">$p$</span>) with a hat to get to the corresponding quantum operator is not idiot-proof. There are many subtleties involved. For example, the ordering ambiguities I mentioned. Classically, you have an observable <span class="math-container">$xp$</span>. If you put on hats, you get an operator <span class="math-container">$\hat{x}\hat{p}$</span> which cannot be an observable because it's not Hermitian (as you can check). There is an issue with it to start with. Classically, <span class="math-container">$xp$</span> is the same as <span class="math-container">$px$</span>, so which one do you choose to put on the hats? In quantum mechanics, since <span class="math-container">$\hat{x}$</span> and <span class="math-container">$\hat{p}$</span> don't commute, the two would give very different operators (and none of them will be Hermitian anyway, so none of them can be observables). We have adopted ordering procedures to deal with such issues, for example, if you say your classical observable is actually <span class="math-container">$\frac{1}{2}(xp+px)$</span> which is the same as <span class="math-container">$xp$</span> in classical mechanics, you get a Hermitian operator when you put on hats. See, for example, Weyl ordering. However, there can be multiple such ordering schemes. This comes back to the point that "quantization is not a functor" as the saying goes, the classical limit of a quantum theory doesn't uniquely determine the full quantum theory. Ultimately, we have to guess as to which quantum theory we think would reduce to the classical theory we're interested in in one of its limits.
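The two ingredients described above, written out explicitly:
<span class="math-container">$$\{x,p\}=1 \;\longrightarrow\; [\hat{x},\hat{p}]=i\hbar, \qquad xp = px \;\longrightarrow\; \tfrac{1}{2}\left(\hat{x}\hat{p}+\hat{p}\hat{x}\right),$$</span>
i.e. Poisson brackets are traded for commutators (up to the factor <span class="math-container">$i\hbar$</span>), and products of non-commuting operators are symmetrised so that observables stay Hermitian.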
|
Dvij D. C. is correct. In a nutshell, the relationship between classical mechanics and quantum mechanics is that the former gives a lot of insight into the latter, but quantum cannot be derived from classical. Rather, classical mechanics gives hints as to what to try, and it gives insight into what quantum formulae are saying and what kind of behaviours will result in certain limits.
So every time we say "here is something classical" and "here is something quantum" the move from classical to quantum is never a derivation. It might be clearer to say "here is something quantum" first, and then add "look, it has a similar overall structure to this classical equation, so the classical equation helps us on our journey into understanding the quantum one, and it can act as a mnemonic too."
Your suspicions, then, were largely right, but it is not quite right to call
the success of <span class="math-container">$x \rightarrow \hat{x},\; p \rightarrow \hat{p}$</span> for a harmonic oscillator a mere coincidence. There is a bit more to it than that.
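Concretely, for the harmonic oscillator the classical Hamiltonian is <span class="math-container">$H = \frac{p^2}{2m} + \frac{1}{2}m\omega^2 x^2$</span>, and its hatted version
<span class="math-container">$$\hat{H} = \frac{\hat{p}^2}{2m} + \frac{1}{2}m\omega^2 \hat{x}^2$$</span>
involves only functions of <span class="math-container">$\hat{x}$</span> alone and <span class="math-container">$\hat{p}$</span> alone, so no ordering ambiguity arises; that is part of why the naive substitution works so cleanly in this case.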
|
https://physics.stackexchange.com
|
137,255
|
[
"https://dba.stackexchange.com/questions/137255",
"https://dba.stackexchange.com",
"https://dba.stackexchange.com/users/80701/"
] |
I'm using the following query to find unused indexes:
<pre><code>SELECT
PSUI.indexrelid::regclass AS IndexName
,PSUI.relid::regclass AS TableName
FROM pg_stat_user_indexes AS PSUI
JOIN pg_index AS PI
ON PSUI.IndexRelid = PI.IndexRelid
WHERE PSUI.idx_scan = 0
AND PI.indisunique IS FALSE;
</code></pre>
Should I run any stats-gathering command or anything else before running it? Is the above query OK for this purpose? I mean, can all indexes shown in the query output then simply be deleted?
It's an 8-year-old DB, so the resulting rows may really be leftovers, and I guess there should be enough statistics by now to tell whether an index is used or not.
|
Seems like a decent approach. Of course, one should apply some human verification to this before automatically dropping everything that seems unused. For example, it's conceivable that the statistics were recently reset and/or an index is only used for some occasional batch tasks.
|
FWIW here's a query I've been using
<pre class="lang-sql prettyprint-override"><code>SELECT
relname AS table,
indexrelname AS index,
pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size,
idx_scan as index_scans
FROM pg_stat_user_indexes ui
JOIN pg_index i ON ui.indexrelid = i.indexrelid
WHERE NOT indisunique AND idx_scan =0 AND pg_relation_size(relid) > 5 * 8192
ORDER BY pg_relation_size(i.indexrelid) / nullif(idx_scan, 0) DESC NULLS FIRST,
pg_relation_size(i.indexrelid) DESC;
</code></pre>
|
https://dba.stackexchange.com
|
419,695
|
[
"https://softwareengineering.stackexchange.com/questions/419695",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/272403/"
] |
I recently refactored a program for readability and maintainability; however, I am totally unaware of which software principle I implemented. I just followed my intuition.
The purpose of this post is to find out which software principle I implemented, so that I can reference books or websites about it.
Before refactoring:
<pre><code>int main(void)
{
char input_cmd[ENOUGH_SIZE] = {0};
get_input_cmd(&input_cmd, ENOUGH_SIZE);
int input_data = get_input_data(input_cmd, strlen(input_cmd));
int access_right = get_access(); //1 = admin; 0 = user
if (!strcmp(input_cmd, "A_cmd"))
{
if (access_right == admin)
{
A_calculate_data_and_save_result_to_buffer(input_data);
move_all_buffer_to_rom();
A_show_result();
return 0;
}
else
{
//error: access deny
return -1;
}
}
else if (!strcmp(input_cmd, "B_cmd"))
{
B_calculate_data_and_save_result_to_buffer(input_data);
move_all_buffer_to_rom();
B_show_result();
return 0;
}
else if (!strcmp(input_cmd, "C_cmd"))
{
C_calculate_data_and_save_result_to_buffer(input_data);
C_show_result();
return 0;
}
else if (!strcmp(input_cmd, "D_cmd"))
{
D_calculate_data_and_save_result_to_buffer(input_data);
return 0;
}
else if (!strcmp(input_cmd, "E_cmd"))
{
E_calculate_data_and_save_result_to_buffer(input_data);
move_all_buffer_to_rom();
return 0;
}
else if (!strcmp(input_cmd, "F_cmd"))
{
if (access_right == admin)
{
F_calculate_data_and_save_result_to_buffer(input_data);
move_all_buffer_to_rom();
return 0;
}
else
{
//error: access deny
return -1;
}
}
else if (!strcmp(input_cmd, "G_cmd"))
{
if (access_right == admin)
{
G_calculate_data_and_save_result_to_buffer(input_data);
return 0;
}
else
{
//error: access deny
return -1;
}
}
else if (!strcmp(input_cmd, "H_cmd"))
{
if (access_right == admin)
{
H_calculate_data_and_save_result_to_buffer(input_data);
H_show_result();
return 0;
}
else
{
//error: access deny
return -1;
}
}
else
{
//error: cmd not found
return -2;
}
return 0;
}
</code></pre>
After refactoring:
<pre><code>static bool find_cmd(char* input, uint16_t* index);
static bool check_access(void);
static void do_nothing(void);
typedef struct cmd_t cmd_t;
struct cmd_t
{
const char* cmd_name;
bool need_check_access;
bool need_move_all_buffer_to_rom;
void (*calculate_data_and_save_result_to_buffer)(int);
void (*show_result)(void);
};
#define YES 1
#define NO 0
static cmd_t cmd_table[] =
{
{"A_cmd", YES, YES, &A_calculate_data_and_save_result_to_buffer, &A_show_result},
{"B_cmd", NO, YES, &B_calculate_data_and_save_result_to_buffer, &B_show_result},
{"C_cmd", NO, NO, &C_calculate_data_and_save_result_to_buffer, &C_show_result},
{"D_cmd", NO, NO, &D_calculate_data_and_save_result_to_buffer, &do_nothing},
{"E_cmd", NO, YES, &E_calculate_data_and_save_result_to_buffer, &do_nothing},
{"F_cmd", YES, YES, &F_calculate_data_and_save_result_to_buffer, &do_nothing},
{"G_cmd", YES, NO, &G_calculate_data_and_save_result_to_buffer, &do_nothing},
{"H_cmd", YES, NO, &H_calculate_data_and_save_result_to_buffer, &H_show_result},
};
int main(void)
{
char input_cmd[ENOUGH_SIZE] = {0};
get_input_cmd(&input_cmd, ENOUGH_SIZE);
uint16_t idx = 0;
bool cmd_found = find_cmd(input_cmd, &idx);
if (!cmd_found)
{
return -2;
}
if (cmd_table[idx].need_check_access)
{
bool access_allowed = check_access();
if (!access_allowed)
{
return -1;
}
}
int input_data = get_input_data(input_cmd, strlen(input_cmd));
cmd_table[idx].calculate_data_and_save_result_to_buffer(input_data);
if (cmd_table[idx].need_move_all_buffer_to_rom)
{
move_all_buffer_to_rom();
}
cmd_table[idx].show_result();
return 0;
}
static bool find_cmd(char* input, uint16_t* index)
{
//loop over the table and compare strings to find the cmd
for (uint16_t i = 0; i < (uint16_t)(sizeof(cmd_table) / sizeof(cmd_table[0])); i++)
{
    if (!strcmp(input, cmd_table[i].cmd_name))
    {
        *index = i;
        return true;
    }
}
return false;
}
static bool check_access(void)
{
#define allowed 1
#define denied 0
bool rv = denied;
int access_right = get_access(); //1 = admin; 0 = user
if (access_right == admin)
{
rv = allowed;
}
return rv;
}
static void do_nothing(void)
{
}
</code></pre>
The original code is definitely not that easy to read. For example, in the real code <code>A_calculate_data_and_save_result_to_buffer</code> might actually be named <code>set_pswd</code>, and the command string <code>A_cmd</code> might be <code>set_pswd</code>, too.
It cost me a lot of time to read through each function, understand what each function does, brainstorm, conclude that although <strong>every function's name is different</strong>, <strong>all of the functions</strong> are in fact <strong>doing the same kind of thing at the conceptual level</strong>, and finally settle on the struct members and decide to use function pointers to represent all the functions.
Even now, I still haven't figured out which principle I applied.
As the title says: what design/component principle did I apply? How can I further improve it?
|
Candied Orange is correct in that this is an application of <em>Replace Conditional with Polymorphism</em>. But I don't think that quite explains why the result of this approach is so powerful.
It might be hard to see but the dispatch table is a fairly crude DSL (Domain Specific Language) which clearly expresses the problem and desired resolution without detailing how to do it. The how to do it part is implemented generically below by the consuming functions, a crude interpreter of sorts.
This is why it is so powerful: the refactoring has lifted the problem up a language level. It's <strong>Declarative Programming</strong>.
Not all applications of this refactoring result in a declarative program. You could have applied the same technique and arrived at class-based polymorphism, which would have left the same knowledge spread throughout the code base, just as the if-else chain had it spread throughout the function.
Safety would still have improved, since the compiler could check some more of the implementation details (like ensuring every function is implemented), and adding or adjusting behaviour would have become easier too, without having to update each call site. But it would not have summarised the behaviour in such a simple, problem-specific view as the solution you arrived at.
|
Believe it or not this is a refactoring called Replace Conditional with Polymorphism. It might be hard to see that because you didn't use classes or objects to get your polymorphism. You used an array of structs. But that's still polymorphism.
This refactoring can be driven by a desire to reduce duplication or by a desire for flexibility. You have reduced duplication, but since every struct resides in the same source file, you haven't achieved the flexibility you might have if adding a new command only required adding one new file. If that's important, you could still do that; a sketch of one way follows.
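For illustration, here is a minimal sketch (not from the original post) of how the table could be populated from separate files. It reuses the <code>cmd_t</code> struct and the <code>YES</code>/<code>NO</code> macros from the question; <code>register_cmd</code>, <code>MAX_CMDS</code> and <code>A_cmd_init</code> are hypothetical names introduced only for this example.
<pre><code>#define MAX_CMDS 32

static cmd_t cmd_table[MAX_CMDS]; /* filled at start-up instead of statically */
static uint16_t cmd_count = 0;

/* Each command module calls this once at start-up to add itself to the table. */
void register_cmd(const cmd_t* cmd)
{
    if (cmd_count < MAX_CMDS)
    {
        cmd_table[cmd_count++] = *cmd; /* copy the descriptor into the table */
    }
}

/* In its own file, e.g. a_cmd.c: */
void A_cmd_init(void)
{
    static const cmd_t a_cmd =
    {
        "A_cmd", YES, YES,
        &A_calculate_data_and_save_result_to_buffer,
        &A_show_result
    };
    register_cmd(&a_cmd);
}
</code></pre>
With something like this, <code>main()</code> (or a platform-specific constructor mechanism) calls the per-command init functions before looking anything up, so adding a command means adding one new file plus one init call, with no edits to the dispatch logic.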
If you're fine editing and compiling this file every time a new command is needed then this dispatch table is fine as is. It has a much more readable form than the nested if-else structure.
|
https://softwareengineering.stackexchange.com
|
372,726
|
[
"https://physics.stackexchange.com/questions/372726",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/143440/"
] |
I know there are similar questions, but I have some arguments which seem to explain that temperature of an ideal gas in a gravitational field will be lower at higher altitudes. I am assuming that:
1. Molecules of the gas are point-sized.
2. They interact only during a collision.
3. All collisions are elastic.
4. Time of collision is negligibly small.
During an elastic collision of two equal masses, say A and B, the velocities of A and B are simply exchanged. If A and B are molecules of the same gas, then they are indistinguishable in appearance. Even if they exchanged their velocities, using their indistinguishability and their zero size, one could say that the particles passed right through each other, because in reality there is no label A or B on the molecules. One can't distinguish whether they collided elastically or passed through each other unaffected.
In a lump of ideal gas in a box made of rigid walls, it is then as if every molecule moves freely, as if it were alone in the box. So, treating each molecule as isolated, I can assert that it slows down when moving upwards (against the gravitational field), and this applies to all the molecules in the box. Hence the average kinetic energy at higher altitudes will be less than at lower altitudes, and so the temperature will be lower at higher altitudes. Am I wrong in concluding this? Or are the assumptions that I made too impractical?
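To put the argument in symbols: for a single molecule of mass $m$ moving upward from height $0$ to height $h$, energy conservation gives $$\tfrac{1}{2}mv^2(h) = \tfrac{1}{2}mv^2(0) - mgh,$$ which is what suggests to me that the typical kinetic energy, and hence the temperature, should decrease with height.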
|
The above-mentioned shell theorem explains how to calculate the gravitational field inside and outside the shell.
It's not really meaningful to ask what the field is exactly on the surface of the shell, because the function $g(r)$ is not continuous at $r = R$.
But it is meaningful to ask what gravitational force acts on each part of the shell from the rest of it. If the force acting on a small part $dm$ of the shell is $g\,dm$, then it's natural to say that the gravitational field on the surface is $g$.
So, you take a small part of the shell $dm$ and want to find the gravitational force acting on this part of the shell. Your approach is correct; the force is $G M\, dm / 2 R^2$.
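As an aside (a standard way to see the factor of $1/2$, added here for completeness): the field jumps from $0$ just inside the shell to $GM/R^2$ just outside, and the force per unit mass on the thin layer of shell material itself is the average of the two, $$g = \frac{1}{2}\left(0 + \frac{GM}{R^2}\right) = \frac{GM}{2R^2}.$$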
I can suggest another approach that gives the same result.
Let's calculate the potential energy of the shell. Take a very small part of it and pull it out to infinity. We have to spend some energy to do this: $G M\, dm/R$. Then we take another part of the shell, and so on. But we take the small parts from different sides of the shell, so that the remaining mass still forms a spherical shell; it just becomes thinner and thinner until it dissolves to nothing.
The energy we have to spend depends on the remaining mass $m$: $dA(m) = G m\, dm /R$. This is a linear function and is easy to integrate. The total energy we spend is $A = G M^2/2R$, and the total gravitational energy of the shell is $W=-GM^2/2R$.
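Written out, the integration is $$A = \int_0^M \frac{G m\, dm}{R} = \frac{G M^2}{2R},$$ and the potential energy of the assembled shell is minus the work needed to disperse it, $W = -A = -\,\frac{GM^2}{2R}$.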
Now let's inflate the sphere by a small amount $x$. Its energy increases by: $$x\,\frac{dW}{dR} = x\,\frac{G M^2}{2R^2}$$
We can also calculate the energy spent pulling each piece of the shell outward by $x$. The force acting on each small piece $dm$ is proportional to $dm$: $f = g\,dm$. So, the total work is $$\sum_{dm} x\, g\, dm = x\,g\,M$$
Now compare the work done with the increase in energy: $$x\,\frac{GM^2}{2R^2} = x\,g\,M \qquad\Longrightarrow\qquad g = \frac{GM}{2R^2}$$
|
Following on from the above, if we let $M$ be a constant, it seems we can then integrate $G M\,dm/2R^2$ over $dm$ from $0$ to $M$ to obtain a sum, or total, of the gravitational forces on the surface of the sphere: $$\int_0^M \frac{G M\,dm}{2R^2} = \frac{G M^2}{2R^2}.$$
|
https://physics.stackexchange.com
|
2,007,414
|
[
"https://math.stackexchange.com/questions/2007414",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/270702/"
] |
<strong>Problem:</strong>
Let $V$ be a finite-dimensional vector space over a field $K$ and let $T$ be an endomorphism of $V$. Show that, for the corresponding $K[x]$-module structure on $V$, the module admits a basis if and only if $V = \{0\}$.
<strong><em>Thoughts:</em></strong>
This amounts to showing that no generating set of the $K[x]$-module $V$ can be linearly independent, but I am not sure how to proceed. Hints and insights are appreciated.
|
If <span class="math-container">$V$</span> is finite-dimensional, say of dimension <span class="math-container">$n$</span>, then for every <span class="math-container">$v\in V$</span> the set
<span class="math-container">$$\{v,Tv,\ldots,T^nv\}$$</span>
is linearly dependent over <span class="math-container">$k$</span>. This means every singleton <span class="math-container">$\{v\}$</span> is <span class="math-container">$k[X]$</span>-linearly dependent, so <span class="math-container">$V$</span> has no <span class="math-container">$k[X]$</span>-basis as soon as it is nontrivial. We have shown, in fact, that <span class="math-container">$V$</span> is a <em>torsion</em> <span class="math-container">$k[X]$</span>-module, and free modules are <em>torsion-free</em>.
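To spell out the step from linear dependence to <span class="math-container">$k[X]$</span>-dependence: a nontrivial relation
<span class="math-container">$$c_0 v + c_1 Tv + \cdots + c_n T^n v = 0, \qquad c_i \in k, \text{ not all zero},$$</span>
says that the nonzero polynomial <span class="math-container">$p(X) = c_0 + c_1 X + \cdots + c_n X^n$</span> satisfies <span class="math-container">$p\cdot v = p(T)v = 0$</span>, which is exactly the statement that <span class="math-container">$\{v\}$</span> is <span class="math-container">$k[X]$</span>-linearly dependent.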
|
This is a module over $k[x]$, a PID. A module structure on $V$ is determined once we know what $x.v$ is for all $v\in V$. This action of $x$ has to be an endomorphism of $V$ as a $k$-vector space; assume $V$ has dimension at least $1$ (but finite). Fixing a basis of $V$, let us say that this endomorphism is given by a matrix $A$.
One can find a nonzero polynomial $f(x)$ such that $f(x).v$ is the zero vector for every $v\in V$, that is, $f(A)v=0$ for all $v\in V$: by the Cayley–Hamilton theorem, choosing $f$ to be the characteristic polynomial of $A$ gives $f(A)\equiv 0$ on $V$. Hence every $v$ is annihilated by a nonzero polynomial, so no nonempty subset of $V$ is $k[x]$-linearly independent, and $V$ admits no basis unless $V=\{0\}$.
|
https://math.stackexchange.com
|
700,726
|
[
"https://physics.stackexchange.com/questions/700726",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/323231/"
] |
We know that a defining property of the metric tensor is that it is non-degenerate, meaning <span class="math-container">$\forall u,\, g(v,u)=0\implies v=0$</span>. Yet from a textbook I read that <span class="math-container">$g(v,v)=0$</span> does not assure <span class="math-container">$v=0$</span>. Why is this? Can't we simply let <span class="math-container">$v=u$</span> in the definition and obtain <span class="math-container">$g(v,v)=0\implies v=0$</span>?
Thanks.
|
I think this is a question of logic:
Suppose <span class="math-container">$$g(v,v)= 0 \Longrightarrow v=0 \tag{1} $$</span> holds. Then we can conclude
<span class="math-container">$$\forall u:\quad g(u,v)=0 \Longrightarrow v=0 \quad, \tag{2}$$</span>
by choosing <span class="math-container">$u=v$</span>. However, the converse need not be true: even if <span class="math-container">$(2)$</span> holds, we cannot conclude from <span class="math-container">$g(v,v)=0$</span> <em>alone</em> that <span class="math-container">$v=0$</span>; the hypothesis in <span class="math-container">$(2)$</span> requires <span class="math-container">$g(u,v)=0$</span> for <em>all</em> <span class="math-container">$u$</span>.
So while <span class="math-container">$(2)$</span> follows from <span class="math-container">$(1)$</span>, the converse is in general not true. Counter examples are provided in the other answers.
|
On Lorentzian manifolds there is an obvious counterexample to your claim, namely null vectors. Let
<span class="math-container">$$g = \begin{pmatrix}-1 & 0 \\ 0 & 1\end{pmatrix}$$</span>
be the Minkowski metric in 2D. Consider
<span class="math-container">$$v= \begin{pmatrix}-1 \\ 1 \end{pmatrix}$$</span>
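Spelling out the computation:
<span class="math-container">$$g(v,v) = -(-1)^2 + (1)^2 = -1 + 1 = 0.$$</span>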
We see that <span class="math-container">$g(v,v)=0$</span> although <span class="math-container">$v \neq 0$</span>. Therefore, your implication is false.
|
https://physics.stackexchange.com
|