Wikipedia:Reference desk/Mathematics

Welcome to the mathematics section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

July 23


Abel's reciprocity relation


Do we have an article that covers Abel's reciprocity relation for differentials of the third kind? Tito Omburo (talk) 18:37, 23 July 2024 (UTC)

Google Books and Google Scholar searches for "Abel's reciprocity relation" do not yield any results,[1][2] so is this perhaps better known under a different name?  --Lambiam 21:22, 23 July 2024 (UTC)
I've seen it called the "first reciprocity law", but that's a bit non-specific. Tito Omburo (talk) 22:31, 23 July 2024 (UTC)
In this book I find a treatment of "The Reciprocity Theorem" for "Abelian differentials of the second and third kind", and in this one, one of what is called "the reciprocity law for differentials of the first and third kinds". I'm only vaguely familiar with the first principles of differential geometry; the limited preview afforded by Google Books does not allow me to understand the notation and see how these two theorems, which superficially look rather different, are related. Both are apparently related to "Abel's theorem" on sums of integrals of certain types of function. We appear to treat only a very simple application of the latter theorem under the name "Abel's identity", so I am afraid the answer to your question is negative.  --Lambiam 09:29, 24 July 2024 (UTC)

Type of curvature?


Where   (polar radius of curvature {RoC}) and   is the normal RoC, what is   (especially the vertex latitude,  )? Since it involves   it would suggest a type of curvature, rather than RoC—?
Likewise with geocentric latitude,  , there is   (with   being the geocentric radius).
This is based on   (so, technically, isn't N the geographic prime vertical RoC, and isn't R, besides being the geocentric radius, also the geocentric prime vertical RoC?). -- Kaimbridge (talk) 18:43, 23 July 2024 (UTC)

The ratio   can be rewritten as   where   is the eccentricity of the ellipsoid. Eccentricity is a dimensionless quantity, even when the radii   and   are expressed using units of length such as kilometres or miles, and so is this ratio. At the poles ( ) it equals   and at the equator ( ) it equals   If the radii are dimensioned lengths, then so is curvature; its dimension is then the inverse of length. Therefore there is no plausible way to interpret this dimensionless ratio as a form of curvature.  --Lambiam 21:05, 23 July 2024 (UTC)
But I thought the root definition of curvature is that its radius is its inverse, and anything else, here b', is just a modifier.
What about the idea of   and   being different prime vertical RoC (in the case of the foundational, parametric   latitude, the prime vertical RoC is just  )? -- Kaimbridge (talk) 03:57, 24 July 2024 (UTC)
I do not understand what you are asking. A geometric interpretation of these quantities? Indeed, the radius of curvature is the inverse of the curvature; my point was that if one is dimensioned, so is the other, while the ratio about which the question appeared to be is inherently dimensionless, so it cannot be some type of curvature. I also do not understand what you mean by "just a modifier". It is the prime-vertical radius of curvature at the pole, which you proceed to divide by that at geodetic latitude    --Lambiam 08:17, 24 July 2024 (UTC)
By "just a modifier", I mean   prime-vertical curvature, so b' is a modifier of that curvature.
Okay, so what about   in this situation? Would   in   be considered the geocentric prime vertical RoC?
In terms of purpose/context,   and   are used (as vertex latitudes   and  ) in the geographic and geocentric (respectively) calculation of geodetic distance (rather than the usual   based, parametric calculation, which has a neutral value,  ), all using the same, iteratively found, geodetically adjusted longitude difference ( ). -- Kaimbridge (talk) 06:54, 25 July 2024 (UTC)
A radius of curvature at a given spot is the radius of an osculating circle. It can be viewed as a vector from the centre of that circle to the given spot. If the direction of this RoC, viewed as a vector, is the vertical direction, it is a vertical RoC. (If, moreover, the plane of the osculating circle is perpendicular to the meridian through the given spot, so it intersects the ellipsoid along the east–west direction, it is the local prime-vertical RoC.) Unless the ellipsoid is a sphere, the geocentric radius at any spot other than the poles or equator, viewed as a vector from the centre to that spot, is not in the vertical direction, so it is not the radius of a locally osculating circle and it is not particularly meaningful to interpret it as either vertical or as a RoC, let alone both.  --Lambiam 10:41, 25 July 2024 (UTC)


July 26


I joined the X and Y axes together. What Wikipedia page already mentions the concept?


I had / have this fairly rather long blabbersome brilliant idea. My question is: since there is no way that I, with a rather low IQ, could come up with something "new", my idea certainly must be merely a rehash of some ideas already mentioned on several Wikipedia pages, in fact probably just a sentence on just one page. But which one(s)? Thanks. Jidanni (talk) 04:56, 26 July 2024 (UTC)

Your page is cumbersome to follow, but if I'm correct in interpreting it, you are essentially proposing the use of a pairing function or Hilbert curve. It is not possible to continuously reduce dimension in this manner (more precisely, 1D and 2D space are not homeomorphic). It would help if you would more rigorously formulate the function you are proposing rather than merely using examples. --Jasper Deng (talk) 06:22, 26 July 2024 (UTC)
Wikipedia defines "pairing function" only for natural numbers, but below I use the term for (not necessarily unique) functions wrapping up two values (not necessarily integers) into a single one of the same type.
Letting   stand for the unit interval   the Hilbert curve can be described as a function   Input one number   output   a pair of numbers. This function is surjective, which implies that there exists an inverse pairing function   Being an inverse means that when   we have   Function   is also continuous, but it is not injective. Many output pairs are reached several times; for example,   So the inverse is not unique.
Numbers in   can be written in base 2; for example,     and   This expansion is not unique:   We can view these binary expansions as infinite sequences   The function   given by   interprets a binary expansion as a real number. This function   is continuous and surjective, just like function   before, so it has an inverse. But, as before, function   is not injective, so the inverse is not unique. However, by convention, it has a "canonical" inverse: other than   the only possible expansion of   in the domain of   avoid sequences ending in an infinite stream of  s.
Now, using   we can define a bicontinuous pairing function   such that
 
This means that we can give a "canonical" definition for   by using   and its canonical inverse:
 
The function   can be described in the form of a 4-state finite-state automaton that gobbles up two streams of bits and produces a single stream of bits. It takes two bits at a time, one from each of the two input streams, and outputs two bits on the output stream.
I suspect that the "brilliant idea" is akin to this way of pairing   and   I expect the idea is well known, but perhaps only as folklore, and I doubt that it is described or even hinted at in Wikipedia mainspace.  --Lambiam, edited 10:51, 28 July 2024 (UTC) (originally 11:11, 26 July 2024 (UTC))
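A finite-precision sketch of the bit-interleaving pairing described above (my own illustration, not Lambiam's automaton; the infinite bit streams are truncated to n digits, so only sufficiently short dyadic rationals round-trip exactly):

```python
def to_bits(x, n):
    """First n binary digits of x in [0, 1)."""
    out = []
    for _ in range(n):
        x *= 2
        d = int(x)
        out.append(d)
        x -= d
    return out

def from_bits(bits):
    """Interpret a list of binary digits as a number in [0, 1)."""
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

def pair(x, y, n=26):
    """Interleave the first n binary digits of x and y into a single number."""
    interleaved = [d for xy in zip(to_bits(x, n), to_bits(y, n)) for d in xy]
    return from_bits(interleaved)

def unpair(z, n=26):
    """Split the first 2n binary digits of z back into the two components."""
    bits = to_bits(z, 2 * n)
    return from_bits(bits[0::2]), from_bits(bits[1::2])

x, y = 0.375, 0.8125          # 0.011 and 0.1101 in binary
z = pair(x, y)
print(unpair(z) == (x, y))    # True: dyadic rationals round-trip exactly
```

The non-uniqueness Lambiam mentions (expansions ending in an infinite stream of 1s) disappears here only because truncation forces a single finite expansion per input.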

July 27


Data estimation with excessive log functions


In health care, I noticed that many estimation algorithms make extensive use of log functions. For example, the ASCVD 10-year risk estimation from "2013 ACC/AHA Guideline on the Assessment of Cardiovascular Risk" sums up a coefficient times the log of age, a coefficient times the log of total cholesterol, a coefficient times the log of HDL, etc. It is a set of coefficients, each multiplied by the log of an attribute. Is this type of function or algorithm the result of a specific type of data modeling? It looks to me like they took a sample data set and correlated the log of each attribute, one at a time, to the outcome and produced a coefficient that represents how correlated the log of that attribute is in the sample set. But I'm just guessing, and I'd prefer to know how this type of function is actually produced. 75.136.148.8 (talk) 10:54, 27 July 2024 (UTC)

I'm not familiar with how this estimator was devised, but model building is an art, especially in cases where the data is noisy and the causal processes are poorly understood. Social scientists routinely use purely linear regression models, because that is what they were taught as students, it is the default model in R, which many use, and everyone else in their field does this. When a variable (independent or dependent) can only assume positive values, it cannot have a normal distribution. This is an indication that pure linear regression may not be the best approach when devising an estimator. So then it is good practice to use a data transformation that makes the observed distribution more normal. I don't know if this is why they did what they did. Another possibility is that they just computed the correlation coefficients and saw they were higher when using a logarithmic scale.  --Lambiam 11:52, 27 July 2024 (UTC)
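As an illustration of this modeling style (a sketch with invented coefficients and synthetic data, not the actual ACC/AHA fitting procedure): a score of the form β₁·log(age) + β₂·log(cholesterol) is just linear regression after log-transforming each predictor, so the coefficients can be recovered by ordinary least squares:

```python
import math, random

random.seed(1)
beta_age, beta_chol = 2.0, -0.5   # hypothetical coefficients, for illustration only

# Synthetic, noise-free "patients": (age, total cholesterol).
rows = [(random.uniform(40, 80), random.uniform(120, 300)) for _ in range(200)]
y = [beta_age * math.log(a) + beta_chol * math.log(c) for a, c in rows]

# Ordinary least squares on the log-transformed predictors
# (normal equations, two predictors, no intercept).
U = [(math.log(a), math.log(c)) for a, c in rows]
suu = sum(u * u for u, v in U)
svv = sum(v * v for u, v in U)
suv = sum(u * v for u, v in U)
sut = sum(u * t for (u, v), t in zip(U, y))
svt = sum(v * t for (u, v), t in zip(U, y))
det = suu * svv - suv * suv
b_age = (sut * svv - svt * suv) / det
b_chol = (svt * suu - sut * suv) / det
print(round(b_age, 6), round(b_chol, 6))   # recovers 2.0 and -0.5 on noise-free data
```

With real, noisy outcomes the same least-squares step yields fitted rather than exact coefficients; the log transform is what makes the multiplicative attribute effects linear.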

Are there other triangular numbers with all digits 6?


6, 66, and 666 are all triangular numbers; are there other triangular numbers with all digits 6? 218.187.67.217 (talk) 16:42, 27 July 2024 (UTC)

These correspond to solutions of the Diophantine equation  
 
 
 
For each solution, the number   is an all-6 triangular number.
I don't expect any further solutions, but neither do I see an argument exhibiting that they cannot exist. The weaker requirement   has four solutions for   for each given value of   corresponding to the final digits   For example, for   they are   The polynomials in   in the rhs of the Diophantine equation are irreducible. It seems that considerations based on modular arithmetic are not going to give further help.  --Lambiam 19:59, 27 July 2024 (UTC)
The discriminant of the quadratic is  . This needs to be a perfect square for there to be a solution, so we need   for some integer k. Since   will get "closer" to being an even perfect square as p approaches infinity, I heuristically wouldn't expect more than a finite number of solutions to exist. --Jasper Deng (talk) 03:34, 28 July 2024 (UTC)
This gives yet another way of phrasing the problem. Define the recurrent sequence a_p by:
a_0 = 1, a_(p+1) = 10·a_p + 39
It goes like this:
1, 49, 529, 5329, 53329, 533329, 5333329, ...
The first four values are squares. Will the sequence ever hit another square?  --Lambiam 10:05, 28 July 2024 (UTC)
It turns out that because the discriminant is added or subtracted to 3 and then divided by 2a=24 in the quadratic formula, there are even more stringent restrictions: the numerator has to be divisible by 24, so we must have   and thus  . That restriction alone would seem to greatly reduce the number of candidates (only every other odd perfect square satisfies that). --Jasper Deng (talk) 04:49, 29 July 2024 (UTC)
If the sequence   ever hits another square   its square root   will satisfy this requirement. This can be seen as follows. For   since   and         The only residue classes for   modulo   that have   are   in all four cases,    --Lambiam 10:13, 29 July 2024 (UTC)
Right. For any modulus m you can use the recursion to easily compute a_p mod m. It's a bit harder, but still possible, to then determine if a_p is a quadratic residue mod m. If it isn't, then you can eliminate that a_p as a non-square. Do this for a few thousand prime (or prime power) values of m and you have a sieve which only lets through those a_p's that are square and a vanishingly small number of "false positives". (There are going to be some m where all the values of a_p are quadratic residues, but this won't happen if 10 is a primitive root mod m, and this occurs at a relatively constant rate.) This could be implemented in Python (or whatever) fairly easily to eliminate all the non-square a_p's up to some value, say p≤10000. Keep in mind that a_10000 would have around 10000 digits, but there's no need for multiprecision arithmetic to carry this out. However, all you would be doing is creating a lower bound on the next highest square a_p; you wouldn't actually be proving there are none. (That's assuming the sieve didn't produce an actual square a_p with p≤10000.) It shouldn't be hard to use a probabilistic argument to show that the "expected" number of squares is finite, but this wouldn't be a proof, rather an indication that it's unlikely that there will be additional squares above a given bound. In any case, I couldn't think of anything that would answer the original question better than a somewhat wishy-washy "probably not". --RDBury (talk) 13:10, 29 July 2024 (UTC)
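For a direct (non-sieve) check over a modest range, one can use the standard criterion that T is triangular exactly when 1 + 8T is a perfect square, and test the all-6 candidates with exact integer arithmetic; a sketch:

```python
from math import isqrt

hits = []
T = 0
for p in range(1, 201):           # T runs through 6, 66, 666, ... (p digits)
    T = 10 * T + 6
    disc = 1 + 8 * T              # T is triangular iff this is a perfect square
    r = isqrt(disc)
    if r * r == disc:
        hits.append((p, (r - 1) // 2))   # T = n(n+1)/2 with n = (sqrt(1+8T)-1)/2

print(hits)   # [(1, 3), (2, 11), (3, 36)] — no further hits up to 200 digits
```

This confirms T_3 = 6, T_11 = 66, and T_36 = 666, and rules out further all-6 triangular numbers below 200 digits, but as noted above a finite search proves nothing beyond its bound.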


July 30


Axiom of choice, axiom of countable choice, any others?


We have the Axiom of countable choice, which is weaker than the Axiom of choice, so is it possible, meaningful, or even already done to have an Axiom of aleph-1 choice or Axiom of cardinality-of-the-continuum choice, weaker than the Axiom of choice? Also, could the Axiom of dependent choice actually be such an axiom, corresponding to some particular aleph? Rich (talk) 20:20, 30 July 2024 (UTC)

The notion of cardinality is a fragile one without AC, but you can certainly define the axiom "If ⟨A_α : α < ω₁⟩ is a sequence of nonempty sets, the product is nonempty", and that could be reasonably thought of as ℵ₁ choice. That's stronger than dependent choice (DC), since it's enough to construct Aronszajn trees, and you can arrange that the Solovay model satisfies DC + no Aronszajn trees. I'm confident it's weaker than full choice, but I don't know enough about symmetric extensions to show it. --Antendren (talk) 06:40, 31 July 2024 (UTC)
Interesting. It makes me wonder if a hierarchy of AC axioms as adjuncts to ZF, akin to what I've read about a hierarchy of large cardinal axioms added to ZFC, could be discovered. Rich (talk) 02:29, 1 August 2024 (UTC)
Even for finite cardinals this is non-trivial; for example, AC_2 constructively implies AC_4. (Andrzej Mostowski, "Axiom of choice for finite sets", Fundamenta Mathematicae, vol. 33 (1945), pp. 137–168.)  --Lambiam 08:29, 1 August 2024 (UTC)
Going in the other direction, the Axiom of global choice is stronger than the Axiom of choice. This axiom is used in Bourbaki's Theory of Sets. I think these weaker forms of the AoC are efforts to find a "Goldilocks zone" where you can do standard real analysis but don't have to be bothered by pesky apparent paradoxes like non-measurable sets. But I don't know if the resulting plethora of different axioms is more helpful than confusing. --RDBury (talk) 15:46, 1 August 2024 (UTC)


August 2


Conditional probability (please check my math)


As measured by recent Polymarket betting odds, Kamala Harris has a 45% chance of winning the POTUS election (electoral vote)[3] and a 70% chance of winning the popular vote.[4] Let's ignore edge cases like 3rd-party candidates and electoral ties. Also, because of how EVs are distributed between red and blue states, while it's theoretically possible for Trump to win the PV and lose the EV, it's unlikely in practice, so let's assign that a probability of 0 (Polymarket puts it around 1%). So the event "Harris wins EV" is a subset of "Harris wins PV".

Let HP = the event "Harris wins PV", HE = Harris wins EV, TP=Trump wins PV, TE=Trump wins electoral.

So (abusing notation) HP=70% is split into 45% inside HE and 25% in TE. TE itself is 55%.

Does this mean that Pr[HP|TE], the probability of Harris winning the popular vote conditional on Trump winning the electoral vote, is .25/.55 = about .45? Does that sound realistic in terms of recent US politics? I guess we saw it happen in 2016 and in 2000. Not sure about 2012. Obviously whether this possibility is a good one or a bad one is subjective and political and is not being asked here. The .45 probability is higher than I would have guessed.

2602:243:2008:8BB0:F494:276C:D59A:C992 (talk) 03:21, 2 August 2024 (UTC)

Your calculation of the conditional probability matches mine for the first question, given the stated assumptions. Whether it's realistic or not is not a math question. Note that Polymarket gives estimated probabilities for all combinations here, with P(HP & HE) = .44, P(TP & TE) = .29, P(HP & TE) = .28 and P(TP & HE) = .01, matching your values to within a few percent. I've been following PredictIt myself, which currently gives Harris a 4 point lead over Trump, so there seems to be a fairly large margin of error in these internet odds sites. --RDBury (talk) 04:43, 2 August 2024 (UTC)
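The arithmetic in both versions is just Pr[HP|TE] = Pr[HP & TE] / Pr[TE]; a quick check with the OP's numbers and with Polymarket's joint table:

```python
# OP's simplification: P(HP) = .70 splits into .45 (with HE) and .25 (with TE);
# P(TE) = .55, since Trump winning the PV but losing the EV is assigned 0.
p_op = 0.25 / 0.55

# Polymarket's joint estimates: P(HP & TE) = .28, P(TE) = P(TP & TE) + P(HP & TE).
p_pm = 0.28 / (0.29 + 0.28)

print(round(p_op, 4), round(p_pm, 4))   # 0.4545 0.4912
```

The two estimates differ by a few points, consistent with RDBury's observation that the sites agree only to within a few percent.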
Thanks. Can I ask how it is possible that different betting sites can have different odds for the exact same event? Does that create arbitrage potential that should get used immediately? I noticed yesterday that the Iowa Electronic Markets gave a very high probability (like 70%+) of Harris winning the PV, and that surprised me, and I wondered why they made no market (as far as I saw) for the EV. I didn't notice til today that Polymarket also gave around 70% for Harris wins PV (not sure if its conditions are exactly the same as IEM's). I'm quite surprised by all of this, especially the PV prediction. For the EV, news outlets have been bipolar for weeks (about Biden and more recently Harris) while the prediction markets stayed fairly serene. Now they're swinging more towards Harris and IDK whether anything has substantively changed. 2602:243:2008:8BB0:F494:276C:D59A:C992 (talk) 04:57, 2 August 2024 (UTC)
Suppose two betting markets have substantively different odds for the same event, say   and   expressed as probabilities in the unit interval   with   Given the way bookmakers calculate the pay-off, which guarantees them a margin of profit, betting in one market on the event occurring (at  ) and at the same time in the other market on it not occurring (at  ) can only give you a guaranteed net win if    --Lambiam 08:22, 2 August 2024 (UTC)
I think you just need p < q (depending on how you distribute the bets), but the difference has to be enough that the profit is more than you'd get just putting your money in the bank and collecting interest. Plus I think that there's a nominal transaction fee with these things. I'm pretty sure it reduces to a minimax problem: find the proportion of bets on E on site a to bets on ~E on site b to maximize the minimum amount you'd win. But I'm also sure plenty of other people have already worked this out and are applying it to equalize the markets. RDBury (talk) 16:46, 2 August 2024 (UTC)
PS. The technical name for this kind of thing is Arbitrage, but it's a bit more complicated than the usual case with stocks and whatnot. In normal arbitrage you buy X in market A and sell X in market B at a higher price. In this case we're buying X in market A and not-X in market B, then waiting until X has been resolved to true or false, which will be months from now. Another factor is that the X in one market may not be exactly the same as the X in the other market, so you have to read the details on each site. For example one site may say "Harris wins" while the other site says "Democrats win". If you don't think it makes a difference then you're not accounting for black-swan type events like the presumptive candidate suddenly dropping out of the race. --RDBury (talk) 17:03, 2 August 2024 (UTC)
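A minimal sketch of the Dutch-book arithmetic behind this (assuming idealized markets that sell a $1-payout share on each outcome at its quoted probability; fees and interest are folded into a single `fee` term):

```python
def guaranteed_profit(p_yes_a, p_event_b, fee=0.0):
    """Buy 'yes on E' on market A at price p_yes_a and 'no on E' on market B at
    price 1 - p_event_b; each share pays $1 on its outcome, and exactly one of
    the two outcomes occurs, so the total payout is always $1."""
    cost = p_yes_a + (1 - p_event_b) + fee
    return 1 - cost          # positive iff the two books can be "dutched"

print(guaranteed_profit(0.45, 0.55))        # ≈ 0.10 per $1 of payout
print(guaranteed_profit(0.45, 0.55, 0.12))  # negative: fees can erase the edge
```

This matches the point above: any gap p < q gives a positive pre-fee margin, but the gap must exceed the combined fees (and the opportunity cost of locking up the stake until resolution) to be worth taking.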

Probability question


Can someone please help me with this probability question? Say I have 10,000 songs on my iPod, 50 of which are by the Beatles. The iPod can play a sequence of songs at random. Assuming no songs are repeated, what is the probability of two Beatles songs being played consecutively? Thanks, --Viennese Waltz 18:03, 2 August 2024 (UTC)

Just to clarify, do you mean the probability that, if you choose two songs at random, then both will be Beatles songs, or the probability that, if you play all 10,000 songs at random, there will be two consecutive Beatles songs somewhere in the shuffled list? GalacticShoe (talk) 18:11, 2 August 2024 (UTC)
I thought this was a simple question that would not require further elucidation, but apparently not. I don't understand your request for clarification, I'm afraid. Imagine I turn the iPod on and start playing songs at random. What is the probability that two Beatles songs will appear consecutively? I can't be any more specific than that, I'm afraid. --Viennese Waltz 18:14, 2 August 2024 (UTC)
How many songs are you playing total? Are you playing possibly all 10,000, until you hit two consecutive Beatles songs? GalacticShoe (talk) 18:20, 2 August 2024 (UTC)
I have no idea how many I'm playing total. I don't see it as a finite activity. I just want to know the probability that two will pop up consecutively. If the question is unanswerable in this form, please let me know! --Viennese Waltz 18:22, 2 August 2024 (UTC)
Well, the problem is that you only have a finite number of songs, so it has to be a finite activity unless you define some behavior for your playlist looping. For example, if the first 10,000 songs are exhausted, then do you reshuffle and play all 10,000 songs again? If that's the case then the probability that you eventually hit two consecutive Beatles songs is essentially 100%. GalacticShoe (talk) 18:25, 2 August 2024 (UTC)
Even if you did manage to get a well-posed version of the problem, I doubt there would be a simple formula. In the ball-and-urn model, you have m balls in an urn, k of which are colored, and you're selecting n balls without replacement; what is the probability that two colored balls in a row will be chosen? There are well-known formulas for the probability that l of the selected balls will be colored, but they don't say anything about the order in which they appear. Part of the problem may be that "no songs are repeated" is vague. Does it mean that no song is repeated twice in a row, or that once a song is played it will never be played again? I think most people here would assume the second meaning, which would imply that the sequence would have to end after all the songs have been played. If it's the first meaning then the sequence could continue forever, but in that case the probability of two consecutive Beatles songs is 1. --RDBury (talk) 19:00, 2 August 2024 (UTC)
Assuming that these are independent events: the odds of a random Beatles song playing are 50:10,000 or 1:200. The consecutive odds are 1:200×200 or 1:40,000. The next step is to subtract the odds they are the same... Modocc (talk) 19:18, 2 August 2024 (UTC)
The odds the same song plays consecutively are 1:10,000×10,000, which is many orders of magnitude smaller than any two of them. Modocc (talk) 19:31, 2 August 2024 (UTC)
See Probability#Independent_events. Modocc (talk) 19:57, 2 August 2024 (UTC)
The events aren't independent, because they are drawn without replacement. Tito Omburo (talk) 20:04, 2 August 2024 (UTC)
My take is that the apps' songs actually repeat (randomly), and from my experience it seems they do, but the OP simply excluded it (as in, that doesn't count). Modocc (talk) 20:13, 2 August 2024 (UTC)
Let's assume instead that the app does not repeat any, but plays all of them. The app selects a Beatles song with exactly the same odds initially, and puts every song played into a dustbin. Another similar example should help. Take 10,000 cards with lyrics and randomly deal one card to each of 10,000 players. With 10,000 non-repetitious events and 50 prized Beatles cards, each player has a one in 200 chance of winning a prize card. The chances that any two players (consecutive or not) are assigned these prizes differ from 1:40,000 though, because they are no longer independent. Modocc (talk) 22:07, 2 August 2024 (UTC)
I'm not sure how this model relates to the original question. Take the case of an iPod with 3 songs, 2 of which are by the Fab Four. After the first of the 2 Beatle cards has been assigned randomly to one of 3 players, the probability that the player who is randomly selected to receive the second card happens to be the same person is   Among the possible shuffles of the songs, only those with the non-Beatle song in the middle have no adjacent Beatle songs. The probability of this song randomly being assigned to the middle is   so the likelihood of a random shuffle producing adjacent Beatle songs equals   much higher than the card model suggests.  --Lambiam 14:00, 3 August 2024 (UTC)
I meant one card. It's been 40 years since I aced stats; I'm rusty and I've forgotten some of it. Modocc (talk) 14:46, 3 August 2024 (UTC)
With three independent deals of one card each: 2:3 to win, and 2/3×2/3 is 4/9 per pair, which is too low. I aced linear algebra too, honors and all that, but I cannot seem to do any of the maths now. I think the inaccuracy gets smaller with a larger deck because only two consecutive plays are involved. Modocc (talk) 16:05, 3 August 2024 (UTC)
(ec) One way of interpreting the question is this: given an urn with 10,000 marbles, 9,950 of which are white and 50 of which are black, drawing one marble at a time from the urn, without replacement but putting them neatly in a row in the order in which they were drawn, what is the probability that the final sequence of 10,000 marbles contains somewhere two adjacent marbles that are both black? Generalizing it and taking a combinatorial view, let   stand for the number of permutations   of the numbers   in which no two adjacent elements   and   are both at most   where   The answer is then   We have   but I don't have a general formula. I suppose, though, that the computation can be made tractable using a (mutual) recurrence relation.  --Lambiam 22:08, 2 August 2024 (UTC)
Experimentally, repeating the process 10,000 times, 2130 runs had adjacent black marbles. So   should be in the order of    --Lambiam 22:53, 2 August 2024 (UTC)

Let n be the number of songs in the playlist, and k the number of Beatles songs. Assuming that every song plays exactly once, we count the number of permutations having the property that two Beatles songs are played consecutively. First, note that the number of all such configurations is n!, which we write as C(n, k)·k!·(n−k)!. This can be interpreted as follows: From the n songs of the playlist, we first select k positions in C(n, k) ways, into which we insert the k Beatles songs in k! ways; then the remaining n−k songs are inserted into the remaining n−k slots in (n−k)! ways. We modify this by replacing the binomial coefficient by the quantity  , whose definition is the number of ways of selecting k among n objects, at least two of which are adjacent. Now, we have

 

where   is the number of ways of selecting k from the n objects such that none of the k selected are adjacent. We then have  . Now the desired probability is just

1 − C(n − k + 1, k) / C(n, k)

When n=10000 and k=50, we get a probability of about 0.21824. Tito Omburo (talk) 16:40, 3 August 2024 (UTC)
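This count can be checked numerically (assuming, per the standard stars-and-bars result, that the number of selections with no two adjacent is C(n−k+1, k)):

```python
from math import comb

def p_adjacent(n, k):
    """Probability that a uniformly random ordering of n songs, k of them by the
    Beatles, plays two Beatles songs back to back (complement of no-adjacency)."""
    return 1 - comb(n - k + 1, k) / comb(n, k)

print(p_adjacent(3, 2))        # 2/3, matching the 3-song example earlier in the thread
print(p_adjacent(10000, 50))   # ≈ 0.21824, consistent with the 2130/10000 simulation
```

The small case n=3, k=2 reproduces Lambiam's 2/3 exactly, which is a useful sanity check on the formula.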

The experimentally observed number of 2130 runs out of 10,000 is within 1.3 sigma of the expected value 2182.4, so this is good agreement.  --Lambiam 19:38, 3 August 2024 (UTC)
Restated as I understand the OP: "What is the probability that two songs played in succession are different Beatles songs?"
I assume that two songs is the minimum they listen to in one session and the odds are small, and my mouse iis malfuctioninggg too... Modocc (talk) 16:57, 3 August 2024 (UTC)
The probability is 0.21824 when the entire playlist of 10,000 songs is listened to. There are 5000 (even) + 4999 (shifted by one, odd) consecutive pairs. Supposing their expectation values are equal, that is a probability of about 0.21824/9999 or 2.183e-5, or about 1/45,076. The values are converging then, as I thought they would. Modocc (talk) 18:30, 3 August 2024 (UTC)
One can add to the model a variable s, which determines the length of a session. Assuming as before that no song is played more than once, we first insert all songs into a playlist in n! ways. Let S_i denote the event that the number of Beatles songs in the first s songs is i. Then, for i from 0 to k, these events partition the probability space. We have P(S_i) = C(k, i)·C(n − k, s − i)/C(n, s). Now, the conditional probability of two consecutive Beatles songs given S_i is 1 − C(s − i + 1, i)/C(s, i). So we get
P = Σ_{i=0}^{k} C(k, i)·C(n − k, s − i)/C(n, s) · (1 − C(s − i + 1, i)/C(s, i))
For example, when n=10000, k=50, and s=10000, we get 0.21824; whereas if s=100 (we only listen to 100 songs), we have a probability of consecutive Beatles songs of 0.00241166. Tito Omburo (talk) 20:26, 3 August 2024 (UTC)
For clarity's sake, it's worth noting that when s=n, this simplifies to 1 − C(n − k + 1, k)/C(n, k). GalacticShoe (talk) 20:35, 3 August 2024 (UTC)
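The session-length variant can be evaluated the same way (a sketch, assuming the weight of each S_i is the hypergeometric probability C(k,i)·C(n−k,s−i)/C(n,s) and the within-session term is 1 − C(s−i+1, i)/C(s, i)):

```python
from math import comb

def p_adjacent_session(n, k, s):
    """P(two consecutive Beatles songs among the first s plays), for n songs
    total of which k are by the Beatles, with no song repeated."""
    total = 0.0
    for i in range(2, min(k, s) + 1):   # need at least 2 Beatles songs in session
        weight = comb(k, i) * comb(n - k, s - i) / comb(n, s)   # hypergeometric P(S_i)
        total += weight * (1 - comb(s - i + 1, i) / comb(s, i))
    return total

print(p_adjacent_session(10000, 50, 100))     # ≈ 0.0024 for a 100-song session
print(p_adjacent_session(10000, 50, 10000))   # ≈ 0.21824, the full-playlist value
```

When s = n the hypergeometric weight collapses onto i = k, recovering the full-playlist formula, which makes for a convenient consistency check.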

Given 2 finite field elements, is it possible to enlarge or decrease the characteristic at the cost of the dimension?


Simple question: given 2 finite field elements, is it possible to compute 2 other equivalent elements having a larger or lower characteristic while keeping the same old discrete logarithm between the 2 new elements (through enlarging or decreasing the dimension, and without computing the discrete logarithm)? 82.66.26.199 (talk) 21:48, 2 August 2024 (UTC)

The algebraic notion of characteristic is usually defined for rings (and therefore also for fields). How do you define it for elements of a field?  --Lambiam 08:45, 3 August 2024 (UTC)
Let's say I have 2 finite field elements having a discrete logarithm s between them. My aim is to redefine 2 equivalent elements having a larger prime but a lower dimension (while keeping the same discrete logarithm s). 82.66.26.199 (talk) 10:02, 3 August 2024 (UTC)
If by "prime" you mean the prime number   of the Galois field   there is no way to relate its elements algebraically to those of a Galois field with another prime. If you remain in the same field, there is a fair chance that   but the left-hand side may have more solutions in   of the equation   than the right-hand side has for the case    --Lambiam 12:58, 3 August 2024 (UTC)
Correct, I was talking about   where   is a 64-bit prime and   as a power is even larger. Isn't it possible to decrease k by increasing p while keeping the same previous m between the 2 equivalent finite field elements? (without knowing the discrete logarithm  ) 2A01:E0A:401:A7C0:9CB:33F3:E8EB:8A5D (talk) 13:35, 3 August 2024 (UTC)
I don't think there is any hope. The algebraic structures of the fields are completely different, so there are no homomorphisms between them.  --Lambiam 14:14, 3 August 2024 (UTC)Reply
Stupid beginner question: why are Pohlig–Hellman variants sometimes applicable to elliptic curves but never to finite fields having a composite dimension? 2A01:E0A:401:A7C0:F4D5:FA63:12A5:B6B6 (talk) 16:46, 3 August 2024 (UTC)Reply
Pohlig–Hellman is applicable in any finite cyclic group such that the prime factorization of the order is known. It is possible to modify Pohlig–Hellman to any finite abelian group (under the same assumption), but if the rank of the abelian group is greater than one, you need several basis elements rather than just one generator for the discrete log. Notably, the multiplicative group of a finite field is always cyclic, so assuming you know how to factor p^n-1, you can use regular PH here. For the group of units in a finite ring, you typically need more information. For example, in the units modulo n, where n is composite, you need to be able to compute phi(n) (equivalent to the prime factorization of n), and phi(phi(n)). To answer your original question, if you know that a=b^x (mod p), and have another prime p', just let b' be a primitive root mod p', and define a' as b'^x (mod p'). Tito Omburo (talk) 17:18, 3 August 2024 (UTC)Reply
As I said, I'm talking about finite fields of composite dimension. This excludes GF(p). As far as I understand, Pohlig–Hellman variants don’t apply between elements of GF(p^k), unlike the ECDLP, but why? 2A01:E0A:401:A7C0:F4D5:FA63:12A5:B6B6 (talk) 19:20, 3 August 2024 (UTC)Reply
It applies if you can determine the prime factorization of p^n-1. The multiplicative group of a finite field is cyclic. Tito Omburo (talk) 20:32, 3 August 2024 (UTC)Reply
It seems to me you are confusing finite fields and finite rings. Finite field elements of dimension greater than 1 are expressed as polynomials like 848848848848487489219*a^4+7378478947844783*a^3+43445998848848898*a^2+87837838383837*a+637837871093836 2A01:E0A:401:A7C0:F4D5:FA63:12A5:B6B6 (talk) 21:46, 3 August 2024 (UTC)Reply
No, I'm not. Any finite (multiplicative) subgroup of *any* field is cyclic. Using PH only requires knowing the prime factorization of the order of the group. In the case of a field with p^n elements, the group of units is cyclic of order p^n-1. (Of course this may not have many small prime factors, e.g., if n=1 and p is a safe prime). Tito Omburo (talk) 22:37, 3 August 2024 (UTC)Reply
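As a concrete illustration of that cyclicity (an editor's sketch, not part of the reply): arithmetic in GF(2^3) = GF(2)[x]/(x^3 + x + 1), with elements encoded as 3-bit integers, shows that the 7 nonzero elements are exactly the powers of x, i.e. the unit group is cyclic of order 2^3 - 1. The modulus polynomial is one arbitrary irreducible choice.

```python
# Sketch: arithmetic in GF(2^3) = GF(2)[x]/(x^3 + x + 1), elements as 3-bit ints.
MOD_POLY = 0b1011  # x^3 + x + 1, irreducible over GF(2)

def gf8_mul(u, v):
    """Multiply two GF(2^3) elements (bit i = coefficient of x^i)."""
    acc = 0
    while v:
        if v & 1:
            acc ^= u          # add (XOR) the current shifted copy of u
        v >>= 1
        u <<= 1
        if u & 0b1000:        # degree reached 3: reduce by the modulus
            u ^= MOD_POLY
    return acc

# Powers of the element a = x (encoded 0b010) sweep out all 7 nonzero elements,
# so the unit group is cyclic of order 7 = 2^3 - 1.
powers, e = [], 1
for _ in range(7):
    powers.append(e)
    e = gf8_mul(e, 0b010)
assert e == 1 and sorted(powers) == list(range(1, 8))
print(sorted(powers))   # [1, 2, 3, 4, 5, 6, 7]
```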
Ok, but then how do I use the resulting factors on a larger polynomial? 2A01:E0A:401:A7C0:F4D5:FA63:12A5:B6B6 (talk) 00:12, 4 August 2024 (UTC)Reply

The details of how you do multiplication in the field are irrelevant. PH works if the group multiplication is just considered as a black box. Tito Omburo (talk) 12:13, 4 August 2024 (UTC)Reply

No, I mean: once I’ve factored the order, how do I shrink the 2 finite field elements to the lower prime factor in order to solve the FFDLP in the lower factor? For example, in the ECDLP this is a matter of dividing each point by the prime factor… Especially if the base prime p is lower than the dimension/power k.
Though I admit I would like sample code/pseudocode to understand at that point… 82.66.26.199 (talk) 15:58, 4 August 2024 (UTC)Reply
Suppose you want to solve g^x = y, where g is a generator of the group of units. If you have the unique factorization into prime powers N = p^n-1 = ∏ p_i^{k_i}, then with N_i = N/p_i^{k_i}, the element g_i = g^{N_i} generates the group G_i of p_i^{k_i}-th roots of unity in GF(p^n). Then one uses the prime power case of PH to solve g_i^{x_i} = y^{N_i} in G_i. This can be achieved efficiently (provided p_i is not a large prime) using the Pohlig–Hellman algorithm#Groups of prime-power order. (Basically, you start with x_i mod p_i and successively remove powers of p_i inductively. This is a search over p_i·k_i possibilities, so polynomial time in k_i for fixed p_i.) So, we get a solution exponent x_i in each group G_i. Finally, let x be an integral combination with x ≡ x_i (mod p_i^{k_i}) for each i, via the Chinese remainder theorem, and x is the solution. Tito Omburo (talk) 17:25, 4 August 2024 (UTC)Reply
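Since sample code was requested, here is an editor's runnable sketch of the steps just described, not code from the thread. It uses the multiplicative group of a prime field GF(p) as the black-box group (the small parameters p = 73 and generator 5 are arbitrary illustrative choices); the same outline applies to GF(p^n) once you can multiply field elements and factor p^n - 1.

```python
def factorize(n):
    """Trial-division factorization; returns {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def dlog_prime_power(g, y, q, k, p):
    """Solve g^x = y (mod p), where g has order q**k, one base-q digit at a time."""
    x = 0
    gamma = pow(g, q ** (k - 1), p)          # element of order q
    for i in range(k):
        # strip the digits found so far, then brute-force the next base-q digit
        h = pow(y * pow(g, -x, p) % p, q ** (k - 1 - i), p)
        digit = next(j for j in range(q) if pow(gamma, j, p) == h)
        x += digit * q ** i
    return x

def pohlig_hellman(g, y, p):
    """Solve g^x = y (mod p) for a generator g of GF(p)*, by CRT over prime powers."""
    N = p - 1
    x, M = 0, 1
    for q, k in factorize(N).items():
        Ni = N // q ** k
        # project into the subgroup of q^k-th roots of unity, as described above
        xi = dlog_prime_power(pow(g, Ni, p), pow(y, Ni, p), q, k, p)
        # Chinese remainder step: fold x_i (mod q^k) into the running solution
        m = q ** k
        x += (xi - x) * pow(M, -1, m) % m * M
        M *= m
    return x % N

# Example: 5 generates GF(73)*, and 72 = 2^3 * 3^2.
p, g, secret = 73, 5, 37
y = pow(g, secret, p)
print(pohlig_hellman(g, y, p))   # 37
```

Note that only `pow` (group multiplication) touches the group elements, which is why the representation of the field, prime or polynomial, does not matter to the algorithm.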
So let’s say I want to solve:
 
 
in GF(7^10).
How would those 2 elements be written (what would the polynomials look like) in order to solve the sub-discrete log in the subgroup of order 2801? 82.66.26.199 (talk) 22:28, 4 August 2024 (UTC)Reply
First raise them both to the power (7^10-1)/2801 in the field, and then check for which of the 2801 possibilities you have g^x=y. Tito Omburo (talk) 22:37, 4 August 2024 (UTC)Reply
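Numerically, that two-step recipe looks like this (an editor's sketch with small stand-in parameters: GF(23)* replacing GF(7^10)*, and subgroup order 11 replacing 2801):

```python
p, q = 23, 11              # 23 - 1 = 2 * 11; q plays the role of 2801
g, x = 5, 13               # 5 generates GF(23)*; x is the secret exponent
y = pow(g, x, p)

gq = pow(g, (p - 1) // q, p)   # raise both elements to (order)/q ...
yq = pow(y, (p - 1) // q, p)   # ... landing them in the subgroup of order q

# then brute force over the q possibilities, exactly as suggested above
x_mod_q = next(j for j in range(q) if pow(gq, j, p) == yq)
assert x_mod_q == x % q
print(x_mod_q)   # 2, i.e. 13 mod 11
```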
So the elements are still degree-10 polynomials over the finite field, just for 2801? 2A01:E0A:401:A7C0:F4D5:FA63:12A5:B6B6 (talk) 01:02, 5 August 2024 (UTC)Reply

August 4


Even binomial coefficients


Is there an elementary way to show that all the entries except the outer two, on a row of Pascal's triangle corresponding to a power of 2, are even? 2A00:23C6:AA0D:F501:658A:3BC6:7F9D:2A4A (talk) 17:37, 4 August 2024 (UTC)Reply

The entries in Pascal's triangle are the coefficients of a^n b^(n-i) in (a+b)^n. It's easy to see that (a+b)^2 = a^2+b^2 mod 2, and consequently (a+b)^4 = a^4+b^4 mod 2, (a+b)^8 = a^8+b^8 mod 2, and in general (a+b)^n = a^n+b^n mod 2 if n is a power of 2. This implies that all the coefficients of a^i b^(n-i) are 0 mod 2 except when i=0 or i=n. Note, there are more complex formulas for finding the coefficient of a^i b^(n-i) mod 2, or mod any other prime, for any values of n and i. --RDBury (talk) 18:40, 4 August 2024 (UTC)Reply
@RDBury: Did you mean a^i b^(n-i) in the first sentence...? --CiaPan (talk) 09:44, 5 August 2024 (UTC)Reply
Yes, good catch. RDBury (talk) 13:20, 5 August 2024 (UTC)Reply
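RDBury's argument can be spot-checked numerically for a few power-of-2 rows (an editor's sketch using Python's `math.comb`, not part of the replies):

```python
from math import comb

# Every interior entry of row 2^m of Pascal's triangle is even,
# while the two outer entries are 1.
for n in (2, 4, 8, 16, 32):
    assert all(comb(n, i) % 2 == 0 for i in range(1, n))
    assert comb(n, 0) == comb(n, n) == 1
print("interior entries of rows 2, 4, 8, 16, 32 are all even")
```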
Another way to see it is, we divide the 2n things into two equal piles of size n, so that to select k things from the 2n is to select i things from one pile and k-i things from the other. That is, we have the following special case of Vandermonde's identity:
C(2n, k) = sum_{i=0}^{k} C(n, i)·C(n, k-i).
In the right-hand side, every term appears twice, except the middle term (if k is even). We thus have
C(2n, k) ≡ C(n, k/2)^2 (mod 2) if k is even, and C(2n, k) ≡ 0 (mod 2) if k is odd.
We iterate this until k is either odd (and the right-hand side is even by induction), or k = 0 or k = 2n, in which case C(2n, 0) = 1 or C(2n, 2n) = 1, which corresponds to one of the two outer binomial coefficients. Tito Omburo (talk) 18:51, 4 August 2024 (UTC)Reply
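The recursion above can be spot-checked numerically (an editor's sketch using Python's `math.comb`, not part of the reply):

```python
from math import comb

# C(2n, k) is even when k is odd, and
# C(2n, k) ≡ C(n, k//2)^2 (mod 2) when k is even.
for n in range(1, 20):
    for k in range(0, 2 * n + 1):
        if k % 2 == 1:
            assert comb(2 * n, k) % 2 == 0
        else:
            assert comb(2 * n, k) % 2 == comb(n, k // 2) ** 2 % 2
print("recursion verified for all rows 2n < 40")
```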
(OP) I think the first approach is more my level, but thanks to both for the replies. 2A00:23C6:AA0D:F501:6CF2:A683:CAFC:3D8 (talk) 07:10, 5 August 2024 (UTC)Reply


August 6
