
(A+xB)/C = y

An algorithm for integers only
(+2, -1)

Before I spend a bunch of time writing the rest of this Idea down, I'm going to ask if there already is such an algorithm out there, to generically and efficiently find a place where an integer summation series such as A+B+B+B+... becomes exactly divisible by some other integer C.

One simple approach is to get two calculators and put both of them into "constant" mode, such that you can get the next result of A+B+B+B+... and C+C+C+... simply by pressing the "=" keys --and then you compare the readouts until you find a match. If B and C are large numbers, though, that approach is very far from being efficient. (After you find one such place, all other places are equal to that place, plus-or-minus various multiples of the quantity [(B*C)/(all common factors)].)
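
For reference, here is roughly what that calculators-in-constant-mode approach looks like as a QBASIC function (just an illustrative sketch of the slow way, not the algorithm this Idea is about; the function name and test values are arbitrary, it assumes C is not Zero, and the tildes stand in for indentation, as explained below the main program):

DECLARE FUNCTION BruteForce& (A AS LONG, B AS LONG, C AS LONG)
~~DIM A AS LONG, B AS LONG, C AS LONG
~~A = 5: B = 3: C = 7
~~PRINT BruteForce&(A, B, C) 'prints 14, because 5+3+3+3 = 14 is divisible by 7
END

FUNCTION BruteForce& (A AS LONG, B AS LONG, C AS LONG)
'Step through A, A+B, A+2*B, ... until a value divides evenly by C.
'Up to ABS(C) loops may be needed, which is hopeless when C has many digits.
~~DIM x AS LONG, v AS LONG
~~v = A
~~FOR x = 0 TO ABS(C) - 1
~~~~IF ((v MOD C) = 0) THEN
~~~~~~BruteForce& = v: EXIT FUNCTION
~~~~END IF
~~~~v = v + B 'NOTE: this can Overflow a LONG when B and C are both large
~~NEXT x
~~PRINT "There is no solution for those values": END
END FUNCTION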

I'm aware that many combinations are impossible, such as 5+2+2+2+..., which will never be exactly divisible by 14, but my question is about an algorithm for efficiently solving the problem when it actually is solvable (which is quite often).

If it already exists, I don't know what the algorithm is called or how widely known it might be. If it isn't widely known, then I'll probably finish this, depending on the annotations. ("Useless" won't sway me, because I've found myself needing such an algorithm often enough over the years that I finally made myself sit down and figure one out.)

---------(a couple days later)----------
(deleted)

---------------(posted Sept 29)-----------
(Some computer code was posted above that was faulty. Certain combinations of values either were not processed correctly, or the program would go into an endless loop. After three weeks of twiddling I stumbled upon something that seems to work excellently. I am deleting the originally posted code, therefore. I suspect the new code below is now rather close to the algorithm described at the "Linear Congruence Theorem" link, but I haven't studied that yet to be sure, because I wanted to be at least independently original with this. That probably also explains why it took me 3 weeks of twiddling to get to this point....)

Here is a QBASIC program for the algorithm. The program is in three parts. The first part declares two subroutine functions, and is simply a set of loops that create numbers to be processed by the algorithm:

DECLARE FUNCTION InterSeries& (A AS LONG, B AS LONG, C AS LONG)
DECLARE FUNCTION GCD& (A AS LONG, B AS LONG)
~~DIM r AS LONG, f1 AS LONG, n AS LONG, c2 AS LONG
~~CLS

~~FOR n = 1 TO 1000
~~~FOR c2 = 1 TO 1000
~~~~~~FOR r = 1 TO 1000
~~~~~~~~IF ((n MOD GCD&(c2, r)) = 0) THEN
~~~~~~~~~~f1 = InterSeries&(n, c2, r): PRINT n; c2; r, f1
~~~~~~~~~~IF (((f1 MOD r) <> 0) OR (((f1 - n) MOD c2) <> 0)) THEN
~~~~~~~~~~~~END
~~~~~~~~~~ELSE
~~~~~~~~~~~~f1 = InterSeries&(-n, c2, r): PRINT -n; c2; r, f1: PRINT
~~~~~~~~~~~~IF (((f1 MOD r) <> 0) OR (((f1 + n) MOD c2) <> 0)) THEN
~~~~~~~~~~~~~~END
~~~~~~~~~~~~END IF
~~~~~~~~~~END IF
~~~~~~~~END IF
~~~~~~NEXT r
~~~~NEXT c2
~~NEXT n
END

(Notice I'm using "tilde" (~) characters for indenting. You will need to search-and-replace them with spaces, before running this program.)

That first block of code tests the numbers to be sure that only solvable combinations are fed to the algorithm. Each combination is fed to the algorithm twice, because when the first number (A) is negative, the result can be different than when it is positive. Every result is tested to be sure it is a correct result.

The next block of code is a simple function to obtain the Greatest Common Divisor of two numbers. It is a required part of the algorithm because it is the best/fastest way to find out whether a given set of numbers is solvable.

FUNCTION GCD& (A AS LONG, B AS LONG)
'Goal: find the Greatest Common Divisor of A and B
'The result is always a positive number, except if an error occurs
~~DIM C AS LONG, d AS LONG, e AS LONG
~~C = ABS(A): d = ABS(B)
~~IF ((C > 0) AND (d > 0)) THEN
~~~~IF (C < d) THEN 'swap so that (d) holds the smaller of the two numbers
~~~~~~e = C: C = d: d = e
~~~~END IF
~~~~DO
~~~~~~e = C MOD d 'vital for smaller number to be divisor (modulus "base")
~~~~~~C = d: d = e 'prepare for next loop as if e is not Zero
~~~~LOOP UNTIL (e = 0)
~~~~GCD& = C '(C) holds the last nonzero divisor, which divided evenly when e became 0
~~ELSE
~~~~IF (C > d) THEN
~~~~~~GCD& = C 'because B is zero and c divides into it
~~~~ELSEIF (C < d) THEN
~~~~~~GCD& = d 'because A is zero and d divides into it
~~~~ELSE
~~~~~~GCD& = 1 'Both are Zero (any other divisor allowed); be harmless!
~~~~END IF
~~END IF
END FUNCTION

Finally, we have the main algorithm function:

FUNCTION InterSeries& (A AS LONG, B AS LONG, C AS LONG)
'Goal: find a place where A+B+B+B+... is exactly divisible by C
~~DIM aa AS LONG, bb AS LONG, cc AS LONG, a2 AS LONG, b2 AS LONG
~~DIM e AS LONG, o AS LONG, z AS LONG, x AS LONG, y AS LONG
~~' Mathematically the initial description can be written [A+(x*B) = y*C]
~~' If there is any solution to a particular group of numbers (A), (B),
~~' and (C), then there will be an infinite number of solutions. So, seek
~~' the value nearest to Zero that satisfies the equality--call it (z).
~~cc = ABS(C): bb = ABS(B) 'work with positives; NOTE: Only the sign
~~' of (A) can lead to significantly different results.
~~IF (A < 0) THEN
~~~~y = (INT(-A / cc) + 1) * cc
~~~~aa = y + A '(aa) is now a positive number of lesser magnitude than (C)
~~ELSE
~~~~y = 0 'not using (y) here as in the DESCRIPTION [A+(x*B) = y*C]
~~~~aa = A 'no need to use the ABS() function here (as done above)
~~END IF
~~'ALSO, copying the parameters because internally they might be changed,
~~' and while QBASIC allows it, we don't want any such changes to be
~~' visible, external to this Function.
~~IF ((aa MOD bb) = 0) THEN
~~~~InterSeries& = 0 - y: EXIT FUNCTION
~~END IF
~~IF (cc = 0) THEN
~~~~PRINT "There is no solution for those values": END
~~END IF
~~IF ((aa MOD cc) = 0) THEN
~~~~InterSeries& = A: EXIT FUNCTION
~~ELSEIF (bb = 0) THEN
~~~~PRINT "There is no solution for those values": END
~~ELSE
~~~~e = GCD&(bb, cc)
~~~~IF ((aa MOD e) > 0) THEN
~~~~~~PRINT "There is no solution for those values": END
~~~~END IF
~~~~IF (e > 1) THEN 'simplify the problem by shrinking the numbers
~~~~~~aa = aa / e: bb = bb / e: cc = cc / e
~~~~END IF
~~~~a2 = aa: b2 = bb 'need to save these values before the next step
~~~~aa = aa MOD cc: bb = bb MOD cc 'now try shrinking the params even more
~~~~o = bb 'need to save this value before the next step
~~~~IF (cc > bb) THEN
~~~~~~'NOTE: [A+(x*B)=y*C] is equivalent to [x*B=(y*C)-A] or [-A+(y*C)=x*B]
~~~~~~' so let the 3rd parameter of the function be the smaller of B or C
~~~~~~bb = cc: cc = o
~~~~END IF
~~~~IF (cc > 1) THEN 'Here is the key to the algorithm: Recursion
~~~~~~z = InterSeries&(aa, bb, cc) '(bb) and (cc) will be swapped and
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~' more moduli will be computed as above
~~~~ELSE 'eventually (cc) will be ONE, at which point the first answer is:
~~~~~~z = aa
~~~~END IF
~~~~IF (o <> bb) THEN
~~~~~~z = -z + aa 'convert solution for (A+xC=yB) to solution for (A+xB=yC)
~~~~~~bb = o 'restore (bb) because might be needed for other cleanup below
~~~~END IF
~~~~IF (a2 > aa) THEN 'undo effect of (aa MOD cc) near start
~~~~~~z = z + a2 - aa
~~~~END IF
~~~~IF (b2 > bb) THEN 'undo effect of (bb MOD cc) near start
~~~~~~x = (z - a2) / bb
~~~~~~z = a2 + (x * b2)
~~~~END IF
~~~~IF (e > 1) THEN
~~~~~~z = z * e 'undo effect of dividing (aa) and (bb) and (cc) by (e)
~~~~END IF
~~~~InterSeries& = z - y 'undo effect of modifying a negative (A)
~~END IF 'Each recursion ends with more cleanup until final answer found.
END FUNCTION

For those who don't know, in BASIC the name of the function is a variable inside the function; when this variable is assigned a value, then the code that called the function will see that value as the result of calling the function. Many other computer languages return a value from a function much more directly, by, for example, this code: return (x);
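
For instance, this trivial Function (not part of the program above; the name is arbitrary) doubles its input by assigning to its own name:

FUNCTION Twice& (x AS LONG)
~~Twice& = x + x 'assigning to the Function's own name is BASIC's way of saying "return"
END FUNCTION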

Vernon, Sep 08 2009

Coin problem http://en.wikipedia.org/wiki/Coin_problem
maybe a place to start your search [xaviergisz, Sep 08 2009]

Linear Congruence Theorem http://en.wikipedia..._congruence_theorem
This has an existence test and an algorithm for solving something similar to this problem. [sninctown, Sep 09 2009]

Bezout's Identity http://en.wikipedia...A9zout%27s_identity
solving Ax+By=d when A and B are nonzero integers and d is the greatest common divisor of A and B. [sninctown, Sep 09 2009]

       Is this a name given to a special beat designed by Al Gore?
xenzag, Sep 08 2009
  

       I look forward to your concise algorithm!
csea, Sep 08 2009
  

       [+] for getting into number theory. That's one course I want to take when I go back to school.
sninctown, Sep 09 2009
  

       something to do with primes mebbe ?
FlyingToaster, Sep 09 2009
  

       what situations have you found a need for this algorithm?   

       It's pretty easy to write some code to do this. For example, I have written a program in Octave to do it. (Octave is an open source equivalent of MATLAB.) Here's the code:

       function y = coinproblem (A, B, C)
% brute force: print every x up to N for which (A + x*B) is exactly divisible by C
Q = 0;
N = 130;
for x = 1 : N
  Q = (A + x*B);
  y = mod(Q, C);
  if y == 0
    x
  endif
endfor
endfunction
  

       to use:
1. copy the above block of text into notepad
2. save it as coinproblem.m
3. run octave and in the command line type: coinproblem(A, B, C)
(where A, B, and C are the numbers in your formula)
4. It should output all integer values of x up to N. (obviously N can be adjusted to suit your needs).
xaviergisz, Sep 09 2009
  

       [WcW], sorry, but it doesn't work that way; there are a lot of intersection points, so the two lines you imagine are actually the same line. Also, since we are dealing with integers only, the word "lines" is not a good way of describing the sequences --that's why they can be the same "line" but also have lots of points where they don't intersect.   

       [xaviergisz], while the code you posted would work, it is not any more efficient than the manual calculator method I already described. Try doing the problem where B and C are 100-digit numbers, and you will see what I mean about the need for efficiency. (BTW, HalfBakery postings can include the HTML line-break code, so you don't need to press the Enter key twice to keep different short lines separated.)   

       [sninctown], The linear congruence theorem is not working with the (+) part of the problem I specified, so its algorithm doesn't work here, except when A is exactly divisible by C.
Vernon, Sep 09 2009
  

       I'm not sure why [WcW]'s suggestion isn't appropriate.   

       If you drew the line of A+(nB) (where n just iterates B) that's just a line with a gradient of nB that intersects with the Y axis at A at n (and X) = 0.   

       Then, to find whether the thing is divisible at C, you check out the position of the line A+(nB) at point C (and, if you wanted to, further iterations of C - effectively describing a line of gradient mC, with a Y axis intersection at m (and X) = 0) ensuring that the intersection point is at *inter-values* for X and Y.   

       Both lines are straight lines, and should only cross once, unless they both pass through point X=0 and Y=0 at which point they (should) lie on equivalent paths, and the results would be true for all values of n and m.
zen_tom, Sep 09 2009
  

       I might be wrong, but the (+) in the Linear Congruence Theorem applies to n, not to a or b. So this is the challenge you are trying to address, [Vernon] (I think).   

       Also, the algorithm that [xaviergisz] provides is not necessarily inefficient. It may be a more formal method of describing the calculator 'constant mode' approach that you've given, [Vernon], but the integer factorisation problems like this are normally dependent on integer size and have many steps in the calculations.
Jinbish, Sep 09 2009
  

       [bigsleep], I think I can disagree with your post on the grounds that since the linear congruence theorem is already associated with a solving algorithm, then if what you wrote was valid, it should be easy instead of difficult to factor large composite numbers. (Maybe the existing algorithm just doesn't work for that variation on the theme? I guess I'll have to try the thing I came up with, to see what happens.)   

       [zen_tom], the two lines have the same gradient, provable from the fact that if there is any integer solution, then they have infinitely many integer solutions (intersections). So, as LINES they would intersect at all points --but they are actually sequences of integers, not lines, and so have only some points in common, not all points.

       [Jinbish] I might have misinterpreted my quick scan of the linear congruence theorem, due to differences in notation. Let me see. If ax= b (mod n), then I interpret that to mean b is the remainder of the division ax/n, with the number of times n divides into ax unspecified (call it y). So ax=b+ny, and that is obviously equivalent to the title of this Idea, after shuffling various letters around:
ax=b+ny
b+ny=ax (algebraic manipulation)
b+ny=cx (replace "a" with "c")
a+ny=cx (replace "b" with "a")
a+ny=cz (replace "x" with "z")
a+nx=cz (replace "y" with "x")
a+bx=cz (replace "n" with "b")
a+bx=cy (replace "z" with "y")
(a+bx)/c=y (algebraic manipulation)
  

       I suppose I should delete this, then...later.   

       Regarding [xaviergisz]'s code, it is a try-all-possibilities approach, and is certainly less efficient than the algorithm described in the Wikipedia article.
Vernon, Sep 09 2009
  

       I don't know how efficient you need it to be. I can come up with an algorithm that will find the base set of x with about 3C operations. If C is "hundreds of digits" then this might not be efficient enough.
ckiick, Sep 09 2009
  

       Could you add a third line, one that looked like a pulse waveform, with an underlier at zero, and spikes going to infinity at each integer point? I'm not sure how you'd model that in a formula, but if you overlaid that shape over your graph, you'd be able to solve it geometrically (I think).
zen_tom, Sep 09 2009
  

       //(After you find one such place, all other places are equal to that place, plus-or-minus various multiples of the quantity [(B*C)/(all common factors)].)//   

       I'd have thought the cycle time would be C/common factors.
We're looking for (n) mod C to be zero, and there are at most C possible remainders. If we see a remainder we've already seen, we can stop, so the most times we need to loop is C, in the case where there are no common factors.
Loris, Sep 09 2009
  

       [ckiick], well, the algorithm I came up with should be able to find a solution in a number of steps roughly comparable to the exponent of the power of 2 that is about the same size as C (that is, roughly the number of binary digits in C). I haven't studied the one in the Wikipedia article enough, yet, to fully understand it (due to the terminology used), but I suspect it is about as efficient as mine (I fear they may turn out to be nearly equivalent procedures, in fact).

       [Loris], I think you are confusing the task of finding ANY intersection of the two sequences with the task of finding the infinite series of intersections. If some number z is an intersection, then because there are two sequences involved, B+B+B+... and C+C+C+..., both B and C are needed to specify the cycle of intersections (B*C). If they have a common factor F (always pick the greatest common factor), then the cycle of intersections will be z + (B*C)/F + (B*C)/F + (B*C)/F + ... (you can subtract from z also).
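
       (A quick example: with A=1, B=2, and C=5, the sequence 1+2+2+2+... is divisible by 5 at the values 5, 15, 25, and so on, which are spaced (2*5)/1 = 10 apart --while the count of 2s added to reach each of them (2, 7, 12, ...) goes up by only 5 = C each time.)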

       [bigsleep] regarding your post below, no, that wiki text you quoted appears to be referring to whole sets of problems, not a single problem like the one described in the main text here.
Vernon, Sep 09 2009
  

       I don't think I'm confused.   

       Suppose A=2, B=3, C=4.
The series (or rather a success test) goes: 2,
(2+3) %(mod) 4=1
(1+3) %4=0 (success)
  

       But to continue:
(0+3)%4=3
(3+3)%4=2
(2+3)%4=1
(1+3)%4=0
A cycle length of C, as I said.
Thus the number of steps needed is at most C, for the naive approach.
  

       It seems to me that you could do much better than that by skipping all the intermediate steps though. The modulus changes by a fixed amount each time, so all you need to do is determine when that hits zero.
It won't if there are common factors which are not a multiple of the offset A, or something like that.
Loris, Sep 09 2009
  

       Theodor Seuss Geisel.
skinflaps, Sep 09 2009
  

       [bigsleep], you have to have two relatively different equations for the substitution thing to work. The problem with your proposed (0+x*1)/C=y is that we are thereby specifying we want to work with the sequence 0+1+1+1+... so obviously when x=C, there will be that many "1"s in the sequence and then y will be an integer. By the way, if you struggle with an attempted algebraic solution to the factoring problem long enough, you can end up with some very nice combinations of variables that equal (x) --but when you actually assign numbers to those other variables and then do the arithmetic, you will end up with the indeterminate value (0/0). (I know because I've done it.) Alas.
Vernon, Sep 09 2009
  

       //while the code you posted would work, it is not any more efficient than the manual calculator method I already described//   

       OK, I just understood from your post that you actually did this calculation with calculators. It would have saved me a bit of time if you had posted the code you (presumably) already use.   

       //Try doing the problem where B and C are 100-digit numbers//   

       Seriously? I'm intrigued as to what application you have for this algorithm.
xaviergisz, Sep 09 2009
  

       [xaviergisz], over the last 4 years I've put a lot of spare time into twiddling with the generic factoring problem. That's not as much time as it sounds since I work 2 jobs, and still have to sleep, buy groceries, do laundry, etc. Smallish numbers are mostly OK for testing ideas, and I have indeed used calculators as described, because they happened to be available when a computer wasn't.   

       Somehow after 4 years I still haven't run out of schemes to try (don't know yet if the latest one is as worthless as all the other attempts, but working on it has kept me away from the HB recently). The current scheme needs this algorithm as a small part of its overall methodology, because, obviously, if the scheme actually worked then some big numbers would eventually be getting processed. Every part of ANY factoring scheme must be as efficient as possible!
Vernon, Sep 09 2009
  

       [vernon], here's another link, and another stab at it from me. I was confused before. Heck, I still am.   

       To solve the problem (maybe),
1. Rearrange to get A+xB=yC, and again to get -Bx+Cy=A
2. Now you need to find all x and y that satisfy the equation. If A is the greatest common divisor of B and C, use the Euclidean algorithm on the Bezout's identity Wikipedia page to find these.
3. If A is not the greatest common divisor, then rewrite x and y as x=k1*x1 and y=k2*y2. The equation becomes (-B*k1)x1+(C*k2)y2=A. Now find k1 and k2 so that A is the greatest common divisor of (-B*k1) and (C*k2), then use (-B*k1) and (C*k2) in the algorithm.
3b. To do this, find the prime factorization of -B,C, and A...I'm not so sure here, and I gotta go for now.
sninctown, Sep 09 2009
  

       OK, maybe I've got it. A/C + Bx/C = y; for y to be integer, the remainders of the two fractions must add up to 1.
Using % as the modulo operation: A % C + Bx % C = C (or zero: C % C = 0).
Rearranging: Bx % C = C - A % C; let m = C - A % C. Then Bx % C = m - which is the form of a linear congruence. (Technically, we should write m % C for the RHS, but it works out the same.)
Using the Euclidean algorithm, find d = gcd(B,C). A solution for x exists IF m % d = 0. That's the existence test.

       The set of all solutions would be Xsub0 + k(m%d), where k is the set of integers. Xsub0 is any single value for x, although we'd prefer the smallest positive value. From what I've read, the extended Euclidean can provide one solution.   

       Efficiency: The existence test relies on the Euclidean algorithm, which is approximately 5 * log10(C). The extended Euclidean is similar, so the "big O" would be proportional to log10(C). That's pretty efficient.
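
       A rough sketch of that approach, in the same QBASIC style as the main text (the function name and variable names are just placeholders; it assumes C > 0, and it ignores possible LONG overflow for very large inputs):

FUNCTION SolveCongruence& (B AS LONG, m AS LONG, C AS LONG)
'Find the smallest x >= 0 with (B * x) MOD C = m MOD C, via the extended Euclidean algorithm.
'For the problem in the main text, call it with m = (C - (A MOD C)) MOD C; the answer value is then A + x*B.
~~DIM r0 AS LONG, r1 AS LONG, s0 AS LONG, s1 AS LONG
~~DIM q AS LONG, t AS LONG, g AS LONG, p AS LONG, x AS LONG
~~r0 = C: r1 = B MOD C: IF (r1 < 0) THEN r1 = r1 + C
~~s0 = 0: s1 = 1 'invariant: r0 = s0*B (mod C) and r1 = s1*B (mod C)
~~DO WHILE (r1 <> 0)
~~~~q = r0 \ r1
~~~~t = r0 - q * r1: r0 = r1: r1 = t 'Euclid step on the remainders
~~~~t = s0 - q * s1: s0 = s1: s1 = t 'same step on the coefficients
~~LOOP
~~g = r0 'g = GCD(B, C), and s0*B = g (mod C)
~~IF ((m MOD g) <> 0) THEN
~~~~PRINT "There is no solution for those values": END
~~END IF
~~p = C \ g 'solutions repeat with this period in x
~~x = ((s0 MOD p) * ((m \ g) MOD p)) MOD p
~~IF (x < 0) THEN x = x + p
~~SolveCongruence& = x
END FUNCTION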
ckiick, Sep 10 2009
  

       [ckiick], your phrasing "for y to be integer, the remainders of the two fractions must add up to 1" is problematic. I think it is more correct to say that the two fractions must add up to 1, OR the remainders must add up to C.   

       In any case, the particular environment in which I need to use the algorithm guarantees that there will always be a solution, so I don't have to worry about an existence test. That's obviously not going to be true of other usage-environments, of course.

       I'll try to post my version of the algorithm later today. When working on it I did happen to think it somewhat resembled the Euclidean Algorithm, and that's the main reason why I didn't post it straight away, and instead asked if it was widely known. Well, it is obviously known, but "widely" seems hardly to be true.   

       So, after a week or two, depending on the annos that follow my updating of the main text here, we'll see about deleting or keeping this Idea.
Vernon, Sep 10 2009
  

       This is perhaps the most arcane thing I've ever seen here.
tatterdemalion, Sep 10 2009
  

       OK, folks, the algorithm I came up with has been posted to the main text here. [DELETED on Sept 29] If you copy it to a computer and try it out for different numbers, please remember that the code has no "existence of an answer" test, because I didn't need one for my own purposes (it could possibly enter an endless loop, until an "Overflow" error occurs).
Vernon, Sep 10 2009
  

       Just following up on sninctown's excellent comment; for the purposes of your formula, is the value of A predetermined, or can it be left as a variable?
xaviergisz, Sep 10 2009
  

       [xaviergisz], in the [DELETED Sept 29] program code I have a commented-out INPUT function near the start, which would allow random entries of A, B, and C. I merely hard-coded them in the next line for my own testing convenience, when transcribing the code from a printout, at a place where I didn't have the actual displayable-copyable file handy. I needed to give it a number of quick tests before posting, and entering INPUTs just wastes time in that situation.
Vernon, Sep 10 2009
  

       my confidence in this site is shaken.
WcW, Sep 10 2009
  

       Let me rephrase my question: In your formula (not your code), how do you arrive at the numbers A, B and C?   

       If A, B and C are independent/arbitrary, then I suspect that you're not going to find an algorithm much more efficient than your original algorithm (or mine). On the other hand, if A can always be chosen such that it is the greatest common divisor of B and C, then there appears to be a very efficient algorithm.
xaviergisz, Sep 11 2009
  

       do good ideas need my applause? I vote for what I like, I comment on what i don't.
WcW, Sep 11 2009
  

       I still think prime numbers figure into it somewhere.
FlyingToaster, Sep 11 2009
  

       [FlyingToaster], well, if A,B, and C are different prime numbers, there is always a solution to the problem. Other than that, though, primality is not particularly significant for anything else here.   

       [xaviergisz], see above. A, B, and C can be quite arbitrary (and not only primes). If there is a solution, I'm confident this algorithm can efficiently find it (with the caveat that there may still need to be some tweaks done at or after this line of code: k = ABS(A + ((x + i) * B)) MOD C).
Vernon, Sep 11 2009
  

       //if A,B, and C are odd prime numbers, there is always a solution to the problem//   

       that's an interesting result, is there a proof?
Also, I think you need the caveat that A, B and C are unique (i.e. different numbers).
xaviergisz, Sep 11 2009
  

       [xaviergisz], heh, I had edited my text while you were working on a reply to what you quoted. See what the Wikipedia article has to say about various numbers being "coprime" (numbers that have no common factors; 14 and 15 are coprime even though neither is a prime).   

       In the main text of this Idea is this: "5+2+2+2+... will never be exactly divisible by 14", and that should be obvious since every member of the first series is odd and every multiple of 14 is even. It might even be obvious that NO even number (not just 14) can be C in that example. Similarly but not so obviously, 5+3+3+3+... will never be exactly divisible by 6 (or any other multiple of 3). More generically, if B is a factor of C but not also a factor of A, then there won't be a solution. But if all three numbers are prime or coprime, that scenario never arises.

       The above is not a proof; it's just some statements derived from inspection. But it may contain keys to a formal proof....
Vernon, Sep 11 2009
  

       If B _is_ a factor of C, but A and C are coprime, is there a solution?
bungston, Sep 11 2009
  

       [bungston], probably not. The two examples in my last post fit your description. (A=5, B=2 or 3, and C=14 or 6; A is coprime with C, and B is a factor of C.) I suspect that as long as B and C are coprime, then it doesn't matter what A is; there will probably be a solution. And if B and C have a common factor, then A has to have that factor, too, for a solution to exist.
Vernon, Sep 11 2009
  

       Your above examples hinge on the oddness and evenness which is a property of the final digit. Multiples of a number change their final digit in a predictable sequence according to the final digit of the original number - the final digit sequence is the same for multiples of 56 and 123456. I think a final digit rule might be applied to (A+Bx) of any size that would exclude divisibility by C of any size.

       Approaching a solution by triaging away nonsolutions is less elegant than one master equation. But if the method is serial application of brute force computing power, a one step rule to exclude possible candidates will speed up the process.
bungston, Sep 11 2009
  

       [bungston], for my own purposes I don't have to worry about situations where there is no answer to the problem, as I've stated before. I just need my code to work whenever there IS a solution.   

       I also suspect you are mistaken about the tail-end-digits thing. The example where B=3 is not about tail-end digits; it is about divisibility by 3. 5 is not divisible by 3, so any multiple of 3, added to 5, will not be divisible by 3. Therefore any C that includes 3 as a factor cannot evenly divide into 5+3+3+..., because the 3 portion-of-C won't divide evenly. That argument also works for the first example and divisibility by 2 (except that common familiarity with odds and evens makes pointing that out a more obvious explanation).

       Regarding 56 and 123456, they share 8 as a factor, so if A is any number that includes 8 as a factor, then it doesn't matter which way the other two are assigned to B and C; there will be a solution, I think (example, 493808, if C=56 and A=16) --my program blew up assigning B and C the other way, looks like the algorithm needs a tweak! (I hope to have time tonight to study it in more detail.) But if A does not have 8 as a factor, I would not expect there to be a solution.
Vernon, Sep 11 2009
  

       Using your equation this is some pretty basic euclidean algebra and you are making it very very complicated. The simple fact (as discovered by Diophantus) is that if in   

       Bx-Cy=A   

       The GCD of B and C is not a factor of A then there is no integer solution and that if it is then there are an infinite number of solutions. I suggest you implement this method. After a minimal quantity of puttering around I haven't found the simple way to find the lowest integer value but it seems it would be in the vein of   

       Bx+A=0 (Mod Y)
WcW, Sep 11 2009
  

       Also, things are much simpler if you use Ax + B = Cy. Also, if all three numbers are prime then there are solutions --a "Diophantus exception" of sorts.
WcW, Sep 11 2009
  

       [WcW], thanks for the verification/clarification of what I was trying to describe from intuition.   

       My program blew up on A=16, B=56, C=123456 because the loop, written to expect a descent toward the value of 1, and which works perfectly when the inputs are coprime, found Zero first in this case (and of course could be expected to do so in other cases). Oops.   

       The program therefore qualifies as HalfBaked at this time, and belongs here, heh. I'll be back to edit the main text after I decide how I want to fix it.
Vernon, Sep 12 2009
  

       "Woah, this is Not the remedial math class."   

       <backs slowly out of the room, shutting the door with a barely audible click>   

       OK, folks, I think I'm done twiddling with the algorithm, and new code has been posted. As I write this it has so far been tested from 1, 1, and 1 as the parameters, to 300, 1000, 1000, and counting.
Vernon, Sep 29 2009
  

       ~ runs off to find someone with Excel to test this.
reensure, Sep 29 2009
  

       Wow.
shapu, Sep 30 2009
  

       OK, as indicated in the first part of the new program code, it is designed to do a billion loops, to test for solvable groups of numbers, and if so, to call the algorithm-function for both A and -A, and test the results of each for validity. I ran it since morning yesterday on a fairly fast computer and it finished up, without error, today in the midafternoon. We can probably consider the algorithm to be reliable.
Vernon, Sep 30 2009
  

       could you summarise how your algorithm works, and the elements that make it efficient?
xaviergisz, Sep 30 2009
  

       [xaviergisz], look at how the GCD Function works. One number is divided into another to obtain the "remainder" of the division (the MOD operation). The remainder is always smaller than the divisor. The Function then makes that remainder the new divisor, dividing it into the previous divisor, to obtain a new and even smaller remainder. Eventually the remainder will be Zero, either because the latest divisor was One, or because that divisor was some number larger than One that happens to evenly divide both of the original numbers fed into the Function --either One or that other, larger number is the Greatest Common Divisor.

       Note that the efficiency of that Function comes from the fact that the remainder of a division is pseudorandom, and therefore about half the time will be a number larger than half of the divisor, and about half the time will be a number smaller than half of the divisor. This means, on the average, when processing two numbers that are roughly equal to 2-to-the-hundredth-power (also very roughly 10-to-the-thirtieth-power), the Function will finish its work in about a hundred loops at most.   
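
       (To put a number on that, here is a throwaway snippet that counts the loops for that same divide-and-keep-the-remainder process; two consecutive Fibonacci numbers are the known worst case for it, and these two are about as large as a LONG allows:)

~~DIM a AS LONG, b AS LONG, t AS LONG, steps AS LONG
~~a = 1836311903: b = 1134903170 'consecutive Fibonacci numbers
~~steps = 0
~~DO WHILE (b <> 0)
~~~~t = a MOD b: a = b: b = t
~~~~steps = steps + 1
~~LOOP
~~PRINT "GCD ="; a; "found after"; steps; "loops" 'prints: GCD = 1 found after 44 loops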

       The "InterSeries" Function is efficient in much the same way, because it also uses remainders as divisors, until One is reached. The main difference is that it uses recursion instead of a simple loop, because in QBASIC that is an effective way to accumulate/save "clean-up" data, until that value of One has been found.   

       If you look at the middle of the Function, you will see a comment indicating that when One has been found, the Answer At That Point is "aa". That is, in the equation (A+xB=yC), if (C) is One then the answer is (A) --because (x) can be Zero and (y) can equal (A). After that point in the Function has been reached, all that needs to be done is, clean up the mess that was made while recursively getting there!   

       So most of the actual work of computing the final result of the Function is done during the clean-up; that is, there are several ways to modify the values that are fed into the Function, and each modification has an effect that must be compensated for ("cleaned up"). Each recursion can modify the values fed into it, all over again, so a whole sequence of clean-up operations, as each recursion finishes, eventually produces the final result of the Function.   

       In other computer languages than QBASIC it may be better to use the simple loop and accumulate the clean-up data in an array that grows (loops have less processing "overhead" than recursion). Also, some of the work done during each recursion (validity testing, checking for variable "A" being negative, looking for a common denominator of the three values) only needs to be done ONCE, and is wastefully repeated (that is, a Function that uses looping instead would not need to have that stuff inside the loop, and therefore would be even faster and more efficient).   

       I expect to be translating this Function to another such language in the near future. I mainly use QBASIC for algorithm development and tweaking, because it has good debugging abilities and no compiling is necessary --just tweak and run, tweak and run, tweak and run, until I get it working right. Even if it takes weeks or months, heh.
Vernon, Oct 01 2009
  

       L = 10   

       FOR A = 1 TO L   

       FOR B = 1 TO L   

       FOR C = 1 TO L   

       FOR x = 1 TO L   

       y = (A + x * B) / C   

       IF y = INT(y) THEN PRINT A, x, B, C, y   

       NEXT x, C, B, A
Your_Name_Here, Oct 04 2009
  

       REM Can you say "Modular Forms?"   

       REM "It goes to eleven."   

       CLS   

       L = 11000000   

       RANDOMIZE TIMER   

       11 A = INT(RND * L) + 1   

       B = INT(RND * L) + 1   

       C = INT(RND * L) + 1   

       x = INT(RND * L) + 1   

       y = (A + x * B) / C   

       IF y = INT(y) THEN PRINT "("; A; "+"; x; "*"; B; ") /"; C; "="; y   

       20 IF INKEY$ = "" THEN GOTO 11   

       END
Your_Name_Here, Oct 04 2009
  

       Suddenly, I realized I really didn't belong here...
XSarenkaX, Oct 08 2009
  

       What if I said that the 'C' actually stood for 'custard'?
Jinbish, Oct 08 2009
  

       f(x,y) = Cy-Bx   

       find the set of (x,y) pairs such that f(x,y) = A and x,y belong to the set of integers.   

       Problem restated?
Cuit_au_Four, Oct 11 2009
  

       is this still a work in progress?   

       Bezout. Why is it that I couldn't find that Wikipedia entry two weeks ago?
WcW, Oct 11 2009
  

       [WcW], the code I posted on Sept 29 in the main text appears to be a satisfactory solution. I've been thinking about whether or not to post the variant function that uses arrays and loops instead of recursion, but the fundamental algorithm is the same either way, so I haven't considered it important to do that. One reason FOR posting it involves an anno by [reensure] about trying the algorithm in Excel --I'm not sure about how Excel might do recursion (I wouldn't be surprised if it can, but...).

       Regarding Bezout's Identity, if [sninctown]'s description is accurate, then it is merely a subset of the problem posed here.
Vernon, Oct 12 2009
  

       // I'm going to ask if there already is such an algorithm out there, to generically and efficiently find a place where an integer summation series such as A+B+B+B+... becomes exactly divisible by some other integer C //   

       Unless I'm misunderstanding the problem statement, this seems like a trivial variant of the least common multiple problem, which can be solved by several algorithms [link]. (A+xB)/C = y seems to me to be equivalent to y = LCM(A - C, B).   

       What was deleted from the idea on September 29th, 2009?
notexactly, Dec 06 2019
  
      