Sum of the reciprocals of radicals











Recall that the radical of an integer $n$ is defined to be $\operatorname{rad}(n) = \prod_{p \mid n} p$.



For a paper, I need the result that
$$\sum_{n \leq x} \frac{1}{\operatorname{rad}(n)} \ll_\varepsilon x^{\varepsilon} \tag{$*$}$$
for all $\varepsilon > 0$. I have a proof of this using complex analysis and Perron's formula, but this seems a bit overkill given that I'm looking for a weak upper bound for a problem in elementary number theory.




Does anyone know of a short elementary proof of the bound $(*)$? Or better yet, a reference?
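
For orientation, here is a minimal numerical sketch (the sieve, cutoff, and sample points are illustrative choices, not part of the question) that tabulates $\operatorname{rad}(n)$ and prints the partial sums appearing in $(*)$:

    # Minimal sketch: tabulate rad(n) with a sieve and print partial sums of 1/rad(n).
    # The cutoff and the sample points are arbitrary choices for illustration.

    def rad_table(limit):
        """rad[n] = product of the distinct primes dividing n, for 0 <= n <= limit."""
        rad = [1] * (limit + 1)
        for p in range(2, limit + 1):
            if rad[p] == 1:                      # nothing has divided p yet, so p is prime
                for m in range(p, limit + 1, p):
                    rad[m] *= p
        return rad

    LIMIT = 10**6
    rad = rad_table(LIMIT)
    partial = 0.0
    for n in range(1, LIMIT + 1):
        partial += 1.0 / rad[n]
        if n in (10**3, 10**4, 10**5, 10**6):
            print(f"x = {n:>9}   sum_(n<=x) 1/rad(n) = {partial:10.2f}")
    # The partial sums grow far more slowly than any fixed power of x, which is what (*)
    # asserts; since (*) carries an epsilon-dependent constant, small x only make it plausible.
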








nt.number-theory reference-request






asked Nov 15 at 15:57 by Daniel Loughran
edited Nov 15 at 23:01 by Michael Hardy




The solutions given are what is sometimes called "Rankin's trick", that is, multiplying the terms of a series over $n\le X$ by $(X/n)^\alpha$ and optimizing $\alpha$. My recollection is that getting an asymptotic for your sum is rather difficult (though again IIRC a log asymptotic is viable by the saddlepoint method).
– literature-searcher
Nov 15 at 19:03
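
To see the trick in action numerically, here is a small sketch along the same lines (the cutoff and the values of $\alpha$ are arbitrary): every term of the sum over $n\le X$ is multiplied by $(X/n)^\alpha\ge 1$, giving an upper bound that one then minimizes over $\alpha$.

    # Rankin's trick, numerically: since (X/n)**alpha >= 1 for every n <= X,
    #   sum_{n<=X} 1/rad(n)  <=  X**alpha * sum_{n<=X} 1/(n**alpha * rad(n)),
    # and one then picks alpha to make the right-hand side small.
    # The cutoff X and the alpha values below are arbitrary illustrations.

    def rad_table(limit):
        rad = [1] * (limit + 1)
        for p in range(2, limit + 1):
            if rad[p] == 1:
                for m in range(p, limit + 1, p):
                    rad[m] *= p
        return rad

    X = 10**5
    rad = rad_table(X)
    true_sum = sum(1.0 / rad[n] for n in range(1, X + 1))
    print(f"plain sum for X = {X}: {true_sum:.2f}")
    for alpha in (0.05, 0.1, 0.2, 0.4):
        majorant = X**alpha * sum(1.0 / (n**alpha * rad[n]) for n in range(1, X + 1))
        print(f"alpha = {alpha:4.2f}   Rankin majorant = {majorant:12.2f}")
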
















5 Answers






You can get away with elementary analytic number theory. Consider the series $\sum_n\frac{1}{n^{\varepsilon}\operatorname{rad}(n)}$. It suffices to show that it converges, since $\sum_{n\le x}\frac{1}{\operatorname{rad}(n)}\le x^{\varepsilon}\sum_n\frac{1}{n^{\varepsilon}\operatorname{rad}(n)}$. The series can be written as the Euler product of the local factors
$$
S(p)=1+p^{-1-\varepsilon}+p^{-1-2\varepsilon}+\dots=1+p^{-1-\varepsilon}\frac{1}{1-p^{-\varepsilon}}\le 1+p^{-1-\frac{\varepsilon}{2}},
$$
the last inequality holding for all but finitely many $p$ (namely once $1-p^{-\varepsilon}\ge p^{-\varepsilon/2}$).
Thus $\prod_p S(p)\le C\prod_p(1+p^{-1-\frac{\varepsilon}{2}})\le C\sum_n n^{-1-\frac{\varepsilon}{2}}<+\infty$.






– fedja, answered Nov 15 at 17:05 (accepted)
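
A quick numerical illustration of this argument (a minimal sketch; $\varepsilon=0.5$ and the truncation points are arbitrary choices): the truncated series and the truncated Euler product of the factors $S(p)$ should both stay bounded and approach the same value.

    # Sketch: the series sum_n 1/(n**eps * rad(n)) and its Euler product prod_p S(p),
    # with S(p) = 1 + p**(-1-eps) / (1 - p**(-eps)), evaluated with finite cutoffs.
    # The value eps = 0.5 and the cutoff N are arbitrary illustrations.

    def rad_table(limit):
        rad = [1] * (limit + 1)
        for p in range(2, limit + 1):
            if rad[p] == 1:
                for m in range(p, limit + 1, p):
                    rad[m] *= p
        return rad

    def primes_upto(limit):
        sieve = [True] * (limit + 1)
        sieve[0] = sieve[1] = False
        for p in range(2, int(limit**0.5) + 1):
            if sieve[p]:
                for m in range(p * p, limit + 1, p):
                    sieve[m] = False
        return [p for p in range(2, limit + 1) if sieve[p]]

    eps, N = 0.5, 10**5
    rad = rad_table(N)
    series = sum(1.0 / (n**eps * rad[n]) for n in range(1, N + 1))
    product = 1.0
    for p in primes_upto(N):
        product *= 1.0 + p**(-1.0 - eps) / (1.0 - p**(-eps))
    print(f"truncated series  (n <= {N}): {series:.4f}")
    print(f"truncated product (p <= {N}): {product:.4f}")
    # Both stay bounded as the cutoffs grow, which is the convergence the answer needs;
    # the two truncations approach the same limit, though a finite cutoff leaves a gap.
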









































First, notice that for any squarefree $m$ and any $\varepsilon>0$ we have

$$\sum_{n:\,\operatorname{rad}(n)=m} \frac{1}{n^\varepsilon}=m^{-\varepsilon}\prod_{p\mid m}(1-p^{-\varepsilon})^{-1}\ll_\varepsilon d(m)/m^\varepsilon,$$

thus the series

$$r(s)=\sum_{n=1}^{+\infty} \frac{1}{n^s\operatorname{rad}(n)}$$

converges absolutely when $\mathrm{Re}\,s>0$. Now, using multiplicativity, one has

$$r(s)=\prod_p \left(1+p^{-s-1}+p^{-2s-1}+\ldots\right)=\prod_p \left(1+\frac{1}{(1-p^{-s})p^{1+s}}\right).$$

Next, notice that for positive $\varepsilon$ we have $1-2^{-\varepsilon}\gg \varepsilon$, and $1-p^{-\varepsilon}\geq \varepsilon$ for $p>2$ and $\varepsilon<1/6$. Therefore we deduce, for any $\varepsilon>0$,

$$r(\varepsilon)\ll \prod_p\left(1+\frac{1}{\varepsilon p^{1+\varepsilon}}\right)\leq \zeta(1+\varepsilon)^{1/\varepsilon}.$$

As $\zeta(1+\varepsilon)=\frac{1}{\varepsilon}+O(1)$, we finally obtain

$$r(\varepsilon)\ll \varepsilon^{-1/\varepsilon}.$$

Using Rankin's trick we arrive at

$$\sum_{n\leq x} \frac{1}{\operatorname{rad}(n)}\ll x^\varepsilon \varepsilon^{-1/\varepsilon}.$$

Choosing $\varepsilon=\sqrt{\frac{\ln\ln x}{2\ln x}}$ we obtain

$$\sum_{n\leq x} \frac{1}{\operatorname{rad}(n)}\leq \exp\left(\sqrt{(2+o(1))\ln x\ln\ln x}\right),$$

which is somewhat weaker than the asymptotic in Don's answer (the exponent there is $\sqrt{8\ln x/\ln\ln x}$ rather than $\sqrt{2\ln x\ln\ln x}$).






– Asymptotiac K, answered Nov 15 at 16:59



















It would be good for you to edit this answer so that it is correct.
– Lucia
Nov 16 at 5:48
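
The choice of $\varepsilon$ in the answer above can be sanity-checked numerically; this sketch minimizes the exponent of the bound $x^{\varepsilon}\varepsilon^{-1/\varepsilon}$ over a crude grid and compares it with $\sqrt{2\ln x\ln\ln x}$ (the grid and the sample values of $x$ are arbitrary):

    # The bound above is x**eps * eps**(-1/eps) = exp(eps*ln x + (1/eps)*ln(1/eps)).
    # Minimise that exponent over eps on a grid and compare with sqrt(2*ln x*lnln x),
    # the value suggested by eps = sqrt(lnln x / (2 ln x)).  Grid and x-values are arbitrary.
    import math

    def exponent(eps, x):
        return eps * math.log(x) + (1.0 / eps) * math.log(1.0 / eps)

    for exp10 in (10, 50, 200):
        x = 10.0 ** exp10
        grid = [k / 10000.0 for k in range(1, 5000)]           # eps in (0, 0.5)
        best_eps = min(grid, key=lambda e: exponent(e, x))
        target = math.sqrt(2.0 * math.log(x) * math.log(math.log(x)))
        print(f"x = 1e{exp10:<3}  grid-optimal exponent = {exponent(best_eps, x):7.2f}"
              f"   sqrt(2 ln x lnln x) = {target:7.2f}   best eps = {best_eps:.4f}")
    # The two exponents have the same order of magnitude; the o(1) in the displayed bound
    # is still clearly visible at these (astronomically small, by asymptotic standards) x.
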































de Bruijn studies this sum in "On the number of integers $\le x$ whose prime factors divide $n$", which was published in a 1962 volume of the Illinois J. Math; see

https://projecteuclid-org.proxy-remote.galib.uga.edu/euclid.ijm/1255631814

He proves there (see Theorem 1) that $$\sum_{n \le x} \frac{1}{\operatorname{rad}(n)} = \exp\left((1+o(1)) \sqrt{8\log{x}/\log\log{x}}\right),$$ as $x\to\infty$. Of course, this implies the $O(x^{\epsilon})$ bound you were after. However, his proof (which uses a Tauberian theorem of Hardy and Ramanujan) is not as elementary as some others that have been suggested here (but gives a more precise result).






– so-called friend Don, answered Nov 16 at 3:22
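
For what it's worth, one can compare the logarithm of the brute-force sum with de Bruijn's exponent at small $x$; the sketch below does this, with the caveat that the $o(1)$ in the theorem is still far from negligible at such sizes (cutoffs are arbitrary):

    # Rough shape check of de Bruijn's Theorem 1: compare log(sum_{n<=x} 1/rad(n)) with
    # sqrt(8*log x / loglog x).  At these tiny x the o(1) term is nowhere near 0, so this
    # is only meant to show the two quantities are of comparable size.
    import math

    def rad_table(limit):
        rad = [1] * (limit + 1)
        for p in range(2, limit + 1):
            if rad[p] == 1:
                for m in range(p, limit + 1, p):
                    rad[m] *= p
        return rad

    LIMIT = 10**6
    rad = rad_table(LIMIT)
    partial = 0.0
    for n in range(1, LIMIT + 1):
        partial += 1.0 / rad[n]
        if n in (10**4, 10**5, 10**6):
            lhs = math.log(partial)
            rhs = math.sqrt(8.0 * math.log(n) / math.log(math.log(n)))
            print(f"x = {n:>9}   log(sum) = {lhs:6.3f}   sqrt(8 log x / loglog x) = {rhs:6.3f}")
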

















Wolfgang Schwarz refined de Bruijn's results at the Tauberian level, but I think it's still a log asymptotic (his ${\mathcal R}$-function is delicate to deal with IIRC). He had three papers on the general subject, the second of which is the most relevant (I give all 3 links). digizeitschriften.de/dms/img/?PID=GDZPPN002181304 digizeitschriften.de/dms/img/?PID=GDZPPN002181339 digizeitschriften.de/en/dms/img/?PID=GDZPPN002182629
– literature-searcher
Nov 16 at 4:12

































It seems that this argument hasn't been presented yet, so I might as well include it.

We can sort the integers $n \in [1, X]$ by their radicals, each of which is a square-free integer $m$. Thus we have

$$\sum_{n \leq X} \frac{1}{\operatorname{rad}(n)} = \sum_{\substack{m \leq X \\ m \text{ square-free}}} \frac{1}{m} \sum_{\substack{n \leq X \\ \operatorname{rad}(n) = m}} 1.$$

Now, $\operatorname{rad}(n) = m$ if and only if $m \mid n$ and $p \mid n \Rightarrow p \mid m$. If we write $m = p_1 \cdots p_k$, then

$$\sum_{\substack{n \leq X \\ \operatorname{rad}(n) = m}} 1 = \#\{(x_1, \cdots, x_k) : x_i \in \mathbb{Z} \cap [0,\infty),\ p_1^{x_1} \cdots p_k^{x_k} \leq X/m\}.$$

The inequality defining the right hand side is equivalent to

$$x_1 \log p_1 + \cdots + x_k \log p_k \leq \log(X/m),$$

and this is just counting integer points with non-negative entries bounded by a simplex, and it is easy to see that

$$\#\{(x_1, \cdots, x_k) : x_1 \log p_1 + \cdots + x_k \log p_k \leq \log(X/m)\} \ll \frac{\log(X/m)}{\prod_{1 \leq i \leq k} \log(p_i)} \ll \log X.$$

EDIT: This last step is wrong, but it can be fixed. Indeed, we can arrange the $p_i$'s so that $p_1 < p_2 < \cdots < p_k$. It then follows from Davenport's lemma that

$$\#\{(x_1, \cdots, x_k) : x_1 \log p_1 + \cdots + x_k \log p_k \leq \log(X/m)\} = O\left(\sum_{i=0}^k \frac{(\log(X/m))^{k-i}}{\prod_{1 \leq j \leq k-i} \log p_j} \right).$$

It then follows that

$$\sum_{n \leq X} \frac{1}{\operatorname{rad}(n)} \ll \sum_{\substack{p_1 < \cdots < p_k \\ p_1 \cdots p_k \leq X}} \sum_{i=0}^k \frac{(\log X)^{k-i}}{\prod_{1 \leq j \leq k-i} p_j \log p_j}.$$

From here I think it is possible to get the bound $O_\epsilon(X^\epsilon)$, but it requires a somewhat more refined analysis of the interaction between the number of primes and the size of the primes.






– Stanley Yao Xiao, answered Nov 16 at 0:49
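
The first identity above (counting $n\le X$ with $\operatorname{rad}(n)=m$ by lattice points under a simplex) is easy to verify numerically; here is a minimal sketch, with arbitrary choices of $m$ and $X$:

    # Check of the identity: #{n <= X : rad(n) = m} equals the number of tuples
    # (x_1,...,x_k) of non-negative integers with p_1^{x_1}*...*p_k^{x_k} <= X/m,
    # for squarefree m = p_1*...*p_k.  The prime sets and the cutoff X are arbitrary.

    def rad(n):
        """Radical of n by trial division (fine for the small n used here)."""
        r, p = 1, 2
        while p * p <= n:
            if n % p == 0:
                r *= p
                while n % p == 0:
                    n //= p
            p += 1 if p == 2 else 2
        return r * n if n > 1 else r

    def lattice_count(primes, budget):
        """Number of tuples (x_1,...,x_k) >= 0 with prod_i p_i**x_i <= budget."""
        if not primes:
            return 1 if budget >= 1 else 0
        p, rest = primes[0], primes[1:]
        total, power = 0, 1
        while power <= budget:
            total += lattice_count(rest, budget // power)
            power *= p
        return total

    X = 10**5
    for prime_set in ([2], [2, 3], [2, 5, 7], [3, 5, 11]):
        m = 1
        for p in prime_set:
            m *= p
        direct = sum(1 for n in range(1, X + 1) if rad(n) == m)
        print(f"m = {m:>4}   direct count = {direct:>5}   lattice-point count = {lattice_count(prime_set, X // m):>5}")
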



















I worry a bit about the uniformity in the "easy to see that" estimate in the $p_j$. Still, a very natural approach.
– Greg Martin
Nov 16 at 0:50






Note that de Bruijn's estimate (quoted in my answer) shows that Greg's concern is a serious one: the sum is in fact not bounded by any fixed power of $\log X$.
– so-called friend Don
Nov 16 at 3:36








The volume of the simplex should involve $(\log(X/m))^k$ rather than just $\log(X/m)$. This changes the bound dramatically.
– Emil Jeřábek
2 days ago










Using a correct formula for the volume, I get $\sum_{n\le X}\frac{1}{\operatorname{rad}(n)}\le\prod_{p\le X}\left(1+\frac{\log X}{p\log p}\right)$, which I believe can be bounded by $\exp\left(\bigl(1+o(1)\bigr)\frac{\log X}{\log\log X}\right)$.
– Emil Jeřábek
2 days ago












Now that I see it, this might begin to explain where Gerhard Paseman got his bound.
– Emil Jeřábek
2 days ago































Here is another approach. Let $p_0$ be the largest prime with $p_0^{(e-1)p_0} \leq x$. The desired sum is bounded above by $P = \prod_{p}(1+\lfloor \log_p x \rfloor/p)$, where the product is over primes $p$ less than or equal to $x$.

When we pick out those terms of $P$ whose numerator is $k$, and consider the product of just those terms, we look at those primes with $p^k \lt x \leq p^{k+1}$, and the log of that product is bounded by $k$ times the sum $S_k$ of $1/p$ over those primes. Mertens' theorem gives $\log((k+1)/k)$ as an approximate value for $S_k$ for small $k$, so the subproduct is approximated by $((k+1)/k)^k$. So for $k=1$ up to just before $(e-1)p_0$, we have broken the product over primes larger than $p_0$ into subproducts each bounded by $e$.

So we have an immediate upper bound on $P$ of $(1 + (\log_2 x)/2)^{\pi(p_0)}e^{(e-1)p_0}$. For $x$ not too small, this is less than $(\log x)^{\pi(p_0)}e^{(e-1)p_0}$. So far we have that the log of your sum is dominated by $\log P$, which in turn is dominated by $(e-1)p_0 + \pi(p_0)\log\log x$. We want this last quantity to be asymptotically less than $\epsilon\log x$.

Well, $(e-1)p_0 \leq (\log x)/(\log p_0)$, so $p_0 \lt (\log x)/f(x)$ for a function $f(x)$ which is slowly increasing. But $\pi(p_0)$ is asymptotically $((\log x)/f(x))/(\log\log x - \log f(x))$, so the second term is only slightly bigger than $(\log x)/f(x)$, but small enough to dip below $\epsilon\log x$.

If you put in some work, you find $f(x)$ is less than but close to $\log\log x$, and far enough away for the fraction $(\log\log x)/(\log\log x - \log f(x))$ not to be a problem. Although the prime number theorem and Mertens' theorem on $\sum 1/p$ are used, this should be elementary enough.

Observation 2018.11.16: Since a weak result is wanted, we can weaken some of the requirements: replace the prime number theorem by a result that bounds $\pi(p)$ from above by $Ap/\log p$, and regroup the terms of the partial product $P$ into pieces each of which multiplies to a number less than $e^2$. One should not need the full strength of Mertens for this. Or, follow the suggestion in the comment below and focus on the product of the biggest $\pi(p_0)$ terms, and show the difference between this product and the sum is sufficiently small. End Observation 2018.11.16.

Gerhard "For Some Value Of 'Enough'" Paseman, 2018.11.15.
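
The opening product bound can be checked directly; the sketch below computes $\log P$ against the brute-force sum for a few small $x$ (purely illustrative choices):

    # The opening bound: sum_{n<=x} 1/rad(n) <= P(x) = prod_{p<=x} (1 + floor(log_p x)/p).
    # (For n <= x the exponent of p lies in [0, floor(log_p x)], and 1/rad(n) = prod_{p|n} 1/p,
    #  so summing 1/rad over all admissible exponent patterns gives exactly the product.)
    import math

    def rad_table(limit):
        rad = [1] * (limit + 1)
        for p in range(2, limit + 1):
            if rad[p] == 1:
                for m in range(p, limit + 1, p):
                    rad[m] *= p
        return rad

    def primes_upto(limit):
        sieve = [True] * (limit + 1)
        sieve[0] = sieve[1] = False
        for p in range(2, int(limit**0.5) + 1):
            if sieve[p]:
                for m in range(p * p, limit + 1, p):
                    sieve[m] = False
        return [p for p in range(2, limit + 1) if sieve[p]]

    for x in (10**3, 10**4, 10**5):
        rad = rad_table(x)
        log_sum = math.log(sum(1.0 / rad[n] for n in range(1, x + 1)))
        log_P = 0.0
        for p in primes_upto(x):
            k, power = 0, p
            while power <= x:                     # k ends up as floor(log_p x)
                k += 1
                power *= p
            log_P += math.log(1.0 + k / p)
        print(f"x = {x:>7}   log(sum) = {log_sum:6.2f}   log(P) = {log_P:6.2f}   log(x) = {math.log(x):6.2f}")
    # P is a genuine but very generous upper bound; the rest of the answer is about showing
    # that log(P) can still be pushed below eps*log(x) once x is large.
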





























One can also upper bound the sum by dividing it into two: one with terms where the radical includes primes bigger than $p_0$, and one with terms where the radical has no primes bigger than $p_0$. The argument above shows that the first part is less substantial than the second part. This suggests to me that looking at the second part (sum over $p_0$-smooth numbers) is more interesting and requires more delicacy. Gerhard "Waves Hands Over Hard Parts" Paseman, 2018.11.16.
– Gerhard Paseman
2 days ago











    Your Answer





    StackExchange.ifUsing("editor", function () {
    return StackExchange.using("mathjaxEditing", function () {
    StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
    StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
    });
    });
    }, "mathjax-editing");

    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "504"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    convertImagesToLinks: true,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: 10,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














     

    draft saved


    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmathoverflow.net%2fquestions%2f315374%2fsum-of-the-reciprocals-of-radicals%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    5 Answers
    5






    active

    oldest

    votes








    5 Answers
    5






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    12
    down vote



    accepted










    You can get away with elementary analytic number theory. Consider the series $sum_nfrac{1}{n^{varepsilon}rm{rad}(n)}$. It suffices to show that it converges. However, it can be written as a product of
    $$
    S(p)=1+p^{-1-varepsilon}+p^{-1-2varepsilon}+dots=1+p^{-1-varepsilon}frac 1{1-p^{-varepsilon}}le 1+p^{-1-fracvarepsilon 2}
    $$

    for all but finitely many $p$.
    Thus $prod_p S(p)le Cprod_p(1+p^{-1-fracvarepsilon 2})lesum_n n^{-1-fracvarepsilon 2}<+infty$






    share|cite|improve this answer

























      up vote
      12
      down vote



      accepted










      You can get away with elementary analytic number theory. Consider the series $sum_nfrac{1}{n^{varepsilon}rm{rad}(n)}$. It suffices to show that it converges. However, it can be written as a product of
      $$
      S(p)=1+p^{-1-varepsilon}+p^{-1-2varepsilon}+dots=1+p^{-1-varepsilon}frac 1{1-p^{-varepsilon}}le 1+p^{-1-fracvarepsilon 2}
      $$

      for all but finitely many $p$.
      Thus $prod_p S(p)le Cprod_p(1+p^{-1-fracvarepsilon 2})lesum_n n^{-1-fracvarepsilon 2}<+infty$






      share|cite|improve this answer























        up vote
        12
        down vote



        accepted







        up vote
        12
        down vote



        accepted






        You can get away with elementary analytic number theory. Consider the series $sum_nfrac{1}{n^{varepsilon}rm{rad}(n)}$. It suffices to show that it converges. However, it can be written as a product of
        $$
        S(p)=1+p^{-1-varepsilon}+p^{-1-2varepsilon}+dots=1+p^{-1-varepsilon}frac 1{1-p^{-varepsilon}}le 1+p^{-1-fracvarepsilon 2}
        $$

        for all but finitely many $p$.
        Thus $prod_p S(p)le Cprod_p(1+p^{-1-fracvarepsilon 2})lesum_n n^{-1-fracvarepsilon 2}<+infty$






        share|cite|improve this answer












        You can get away with elementary analytic number theory. Consider the series $sum_nfrac{1}{n^{varepsilon}rm{rad}(n)}$. It suffices to show that it converges. However, it can be written as a product of
        $$
        S(p)=1+p^{-1-varepsilon}+p^{-1-2varepsilon}+dots=1+p^{-1-varepsilon}frac 1{1-p^{-varepsilon}}le 1+p^{-1-fracvarepsilon 2}
        $$

        for all but finitely many $p$.
        Thus $prod_p S(p)le Cprod_p(1+p^{-1-fracvarepsilon 2})lesum_n n^{-1-fracvarepsilon 2}<+infty$







        share|cite|improve this answer












        share|cite|improve this answer



        share|cite|improve this answer










        answered Nov 15 at 17:05









        fedja

        36.6k7105200




        36.6k7105200






















            up vote
            10
            down vote













            First, notice that for any squarefree $m$ and any $varepsilon>0$ we have



            notice that



            $$sum_{n:operatorname{rad}(n)=m} frac{1}{n^varepsilon}=m^{-varepsilon}prod_{pmid m}(1-p^{-varepsilon})^{-1}ll_varepsilon d(m)/m^varepsilon,$$



            thus, the series



            $$r(s)=sum_{n=1}^{+infty} frac{1}{n^smathrm{rad}(n)}$$



            converges absolutely when $mathrm{Re},s>0$. Now, using multiplicativity, one has



            $$r(s)=prod_p (1+p^{-s-1}+p^{-2s-1}+ldots)=prod_p (1+frac{1}{(1-p^{-s})p^{1+s}}).$$



            Next, notice that for positive $varepsilon$ we have $1-2^{-varepsilon}gg varepsilon$ and $1-p^{-varepsilon}geq varepsilon$ for $p>2$ and $varepsilon<1/6$. Therefore we deduce for any $varepsilon>0$



            $$r(varepsilon)ll prod_pleft(1+frac{1}{varepsilon p^{1+varepsilon}}right)leq zeta(1+varepsilon)^{1/varepsilon}.$$



            As $zeta(1+varepsilon)=frac{1}{varepsilon}+O(1)$, we finally obtain



            $$r(varepsilon)ll varepsilon^{-1/varepsilon}.$$



            Using Rankin trick we arrive at



            $$sum_{nleq x} frac{1}{mathrm{rad}(n)}ll x^varepsilon varepsilon^{-1/varepsilon}.$$



            Choosing $varepsilon=sqrt{frac{lnln x}{2ln x}}$ we prove that



            $$sum_{nleq x} frac{1}{mathrm{rad}(n)}leq exp(sqrt{(2+o(1))ln xlnln x}),$$



            which is a bit non-optimal by the answer of Don. (But at least we have the correct $lnln$ asymptotics)






            share|cite|improve this answer



















            • 1




              It would be good for you to edit this answer so that it is correct.
              – Lucia
              Nov 16 at 5:48















            up vote
            10
            down vote













            First, notice that for any squarefree $m$ and any $varepsilon>0$ we have



            notice that



            $$sum_{n:operatorname{rad}(n)=m} frac{1}{n^varepsilon}=m^{-varepsilon}prod_{pmid m}(1-p^{-varepsilon})^{-1}ll_varepsilon d(m)/m^varepsilon,$$



            thus, the series



            $$r(s)=sum_{n=1}^{+infty} frac{1}{n^smathrm{rad}(n)}$$



            converges absolutely when $mathrm{Re},s>0$. Now, using multiplicativity, one has



            $$r(s)=prod_p (1+p^{-s-1}+p^{-2s-1}+ldots)=prod_p (1+frac{1}{(1-p^{-s})p^{1+s}}).$$



            Next, notice that for positive $varepsilon$ we have $1-2^{-varepsilon}gg varepsilon$ and $1-p^{-varepsilon}geq varepsilon$ for $p>2$ and $varepsilon<1/6$. Therefore we deduce for any $varepsilon>0$



            $$r(varepsilon)ll prod_pleft(1+frac{1}{varepsilon p^{1+varepsilon}}right)leq zeta(1+varepsilon)^{1/varepsilon}.$$



            As $zeta(1+varepsilon)=frac{1}{varepsilon}+O(1)$, we finally obtain



            $$r(varepsilon)ll varepsilon^{-1/varepsilon}.$$



            Using Rankin trick we arrive at



            $$sum_{nleq x} frac{1}{mathrm{rad}(n)}ll x^varepsilon varepsilon^{-1/varepsilon}.$$



            Choosing $varepsilon=sqrt{frac{lnln x}{2ln x}}$ we prove that



            $$sum_{nleq x} frac{1}{mathrm{rad}(n)}leq exp(sqrt{(2+o(1))ln xlnln x}),$$



            which is a bit non-optimal by the answer of Don. (But at least we have the correct $lnln$ asymptotics)






            share|cite|improve this answer



















            • 1




              It would be good for you to edit this answer so that it is correct.
              – Lucia
              Nov 16 at 5:48













            up vote
            10
            down vote










            up vote
            10
            down vote









            First, notice that for any squarefree $m$ and any $varepsilon>0$ we have



            notice that



            $$sum_{n:operatorname{rad}(n)=m} frac{1}{n^varepsilon}=m^{-varepsilon}prod_{pmid m}(1-p^{-varepsilon})^{-1}ll_varepsilon d(m)/m^varepsilon,$$



            thus, the series



            $$r(s)=sum_{n=1}^{+infty} frac{1}{n^smathrm{rad}(n)}$$



            converges absolutely when $mathrm{Re},s>0$. Now, using multiplicativity, one has



            $$r(s)=prod_p (1+p^{-s-1}+p^{-2s-1}+ldots)=prod_p (1+frac{1}{(1-p^{-s})p^{1+s}}).$$



            Next, notice that for positive $varepsilon$ we have $1-2^{-varepsilon}gg varepsilon$ and $1-p^{-varepsilon}geq varepsilon$ for $p>2$ and $varepsilon<1/6$. Therefore we deduce for any $varepsilon>0$



            $$r(varepsilon)ll prod_pleft(1+frac{1}{varepsilon p^{1+varepsilon}}right)leq zeta(1+varepsilon)^{1/varepsilon}.$$



            As $zeta(1+varepsilon)=frac{1}{varepsilon}+O(1)$, we finally obtain



            $$r(varepsilon)ll varepsilon^{-1/varepsilon}.$$



            Using Rankin trick we arrive at



            $$sum_{nleq x} frac{1}{mathrm{rad}(n)}ll x^varepsilon varepsilon^{-1/varepsilon}.$$



            Choosing $varepsilon=sqrt{frac{lnln x}{2ln x}}$ we prove that



            $$sum_{nleq x} frac{1}{mathrm{rad}(n)}leq exp(sqrt{(2+o(1))ln xlnln x}),$$



            which is a bit non-optimal by the answer of Don. (But at least we have the correct $lnln$ asymptotics)






            share|cite|improve this answer














            First, notice that for any squarefree $m$ and any $varepsilon>0$ we have



            notice that



            $$sum_{n:operatorname{rad}(n)=m} frac{1}{n^varepsilon}=m^{-varepsilon}prod_{pmid m}(1-p^{-varepsilon})^{-1}ll_varepsilon d(m)/m^varepsilon,$$



            thus, the series



            $$r(s)=sum_{n=1}^{+infty} frac{1}{n^smathrm{rad}(n)}$$



            converges absolutely when $mathrm{Re},s>0$. Now, using multiplicativity, one has



            $$r(s)=prod_p (1+p^{-s-1}+p^{-2s-1}+ldots)=prod_p (1+frac{1}{(1-p^{-s})p^{1+s}}).$$



            Next, notice that for positive $varepsilon$ we have $1-2^{-varepsilon}gg varepsilon$ and $1-p^{-varepsilon}geq varepsilon$ for $p>2$ and $varepsilon<1/6$. Therefore we deduce for any $varepsilon>0$



            $$r(varepsilon)ll prod_pleft(1+frac{1}{varepsilon p^{1+varepsilon}}right)leq zeta(1+varepsilon)^{1/varepsilon}.$$



            As $zeta(1+varepsilon)=frac{1}{varepsilon}+O(1)$, we finally obtain



            $$r(varepsilon)ll varepsilon^{-1/varepsilon}.$$



            Using Rankin trick we arrive at



            $$sum_{nleq x} frac{1}{mathrm{rad}(n)}ll x^varepsilon varepsilon^{-1/varepsilon}.$$



            Choosing $varepsilon=sqrt{frac{lnln x}{2ln x}}$ we prove that



            $$sum_{nleq x} frac{1}{mathrm{rad}(n)}leq exp(sqrt{(2+o(1))ln xlnln x}),$$



            which is a bit non-optimal by the answer of Don. (But at least we have the correct $lnln$ asymptotics)







            share|cite|improve this answer














            share|cite|improve this answer



            share|cite|improve this answer








            edited 2 days ago

























            answered Nov 15 at 16:59









            Asymptotiac K

            1,2241313




            1,2241313








            • 1




              It would be good for you to edit this answer so that it is correct.
              – Lucia
              Nov 16 at 5:48














            • 1




              It would be good for you to edit this answer so that it is correct.
              – Lucia
              Nov 16 at 5:48








            1




            1




            It would be good for you to edit this answer so that it is correct.
            – Lucia
            Nov 16 at 5:48




            It would be good for you to edit this answer so that it is correct.
            – Lucia
            Nov 16 at 5:48










            up vote
            8
            down vote













            de Bruijn studies this sum in "On the number of integers $le x$ whose prime factors divide $n$", which was published in a 1962 volume of the Illinois J. Math; see



            https://projecteuclid-org.proxy-remote.galib.uga.edu/euclid.ijm/1255631814



            He proves there (see Theorem 1) that $$sum_{n le x} frac{1}{mathrm{rad}(n)} = exp((1+o(1)) sqrt{8log{x}/loglog{x}}),$$ as $xtoinfty$. Of course, this implies the $O(x^{epsilon})$ bound you were after. However, his proof (which uses a Tauberian theorem of Hardy and Ramanujan) is not as elementary as some others that have been suggested here (but gives a more precise result).






            share|cite|improve this answer

















            • 3




              Wolfgang Schwarz refined de Bruijn's results at the Tauberian level, but I think it's still a log asymptotic (his ${mathcal R}$-function is delicate to deal with IIRC). He had three papers on the general subject, the second of which is the most relevant (I give all 3 links). digizeitschriften.de/dms/img/?PID=GDZPPN002181304 digizeitschriften.de/dms/img/?PID=GDZPPN002181339 digizeitschriften.de/en/dms/img/?PID=GDZPPN002182629
              – literature-searcher
              Nov 16 at 4:12

















            up vote
            8
            down vote













            de Bruijn studies this sum in "On the number of integers $le x$ whose prime factors divide $n$", which was published in a 1962 volume of the Illinois J. Math; see



            https://projecteuclid-org.proxy-remote.galib.uga.edu/euclid.ijm/1255631814



            He proves there (see Theorem 1) that $$sum_{n le x} frac{1}{mathrm{rad}(n)} = exp((1+o(1)) sqrt{8log{x}/loglog{x}}),$$ as $xtoinfty$. Of course, this implies the $O(x^{epsilon})$ bound you were after. However, his proof (which uses a Tauberian theorem of Hardy and Ramanujan) is not as elementary as some others that have been suggested here (but gives a more precise result).






            share|cite|improve this answer

















            • 3




              Wolfgang Schwarz refined de Bruijn's results at the Tauberian level, but I think it's still a log asymptotic (his ${mathcal R}$-function is delicate to deal with IIRC). He had three papers on the general subject, the second of which is the most relevant (I give all 3 links). digizeitschriften.de/dms/img/?PID=GDZPPN002181304 digizeitschriften.de/dms/img/?PID=GDZPPN002181339 digizeitschriften.de/en/dms/img/?PID=GDZPPN002182629
              – literature-searcher
              Nov 16 at 4:12















            up vote
            8
            down vote










            up vote
            8
            down vote









            de Bruijn studies this sum in "On the number of integers $le x$ whose prime factors divide $n$", which was published in a 1962 volume of the Illinois J. Math; see



            https://projecteuclid-org.proxy-remote.galib.uga.edu/euclid.ijm/1255631814



            He proves there (see Theorem 1) that $$sum_{n le x} frac{1}{mathrm{rad}(n)} = exp((1+o(1)) sqrt{8log{x}/loglog{x}}),$$ as $xtoinfty$. Of course, this implies the $O(x^{epsilon})$ bound you were after. However, his proof (which uses a Tauberian theorem of Hardy and Ramanujan) is not as elementary as some others that have been suggested here (but gives a more precise result).






            share|cite|improve this answer












            de Bruijn studies this sum in "On the number of integers $le x$ whose prime factors divide $n$", which was published in a 1962 volume of the Illinois J. Math; see



            https://projecteuclid-org.proxy-remote.galib.uga.edu/euclid.ijm/1255631814



            He proves there (see Theorem 1) that $$sum_{n le x} frac{1}{mathrm{rad}(n)} = exp((1+o(1)) sqrt{8log{x}/loglog{x}}),$$ as $xtoinfty$. Of course, this implies the $O(x^{epsilon})$ bound you were after. However, his proof (which uses a Tauberian theorem of Hardy and Ramanujan) is not as elementary as some others that have been suggested here (but gives a more precise result).







            share|cite|improve this answer












            share|cite|improve this answer



            share|cite|improve this answer










            answered Nov 16 at 3:22









            so-called friend Don

            4,97811720




            4,97811720








            • 3




              Wolfgang Schwarz refined de Bruijn's results at the Tauberian level, but I think it's still a log asymptotic (his ${mathcal R}$-function is delicate to deal with IIRC). He had three papers on the general subject, the second of which is the most relevant (I give all 3 links). digizeitschriften.de/dms/img/?PID=GDZPPN002181304 digizeitschriften.de/dms/img/?PID=GDZPPN002181339 digizeitschriften.de/en/dms/img/?PID=GDZPPN002182629
              – literature-searcher
              Nov 16 at 4:12
















            • 3




              Wolfgang Schwarz refined de Bruijn's results at the Tauberian level, but I think it's still a log asymptotic (his ${mathcal R}$-function is delicate to deal with IIRC). He had three papers on the general subject, the second of which is the most relevant (I give all 3 links). digizeitschriften.de/dms/img/?PID=GDZPPN002181304 digizeitschriften.de/dms/img/?PID=GDZPPN002181339 digizeitschriften.de/en/dms/img/?PID=GDZPPN002182629
              – literature-searcher
              Nov 16 at 4:12










            3




            3




            Wolfgang Schwarz refined de Bruijn's results at the Tauberian level, but I think it's still a log asymptotic (his ${mathcal R}$-function is delicate to deal with IIRC). He had three papers on the general subject, the second of which is the most relevant (I give all 3 links). digizeitschriften.de/dms/img/?PID=GDZPPN002181304 digizeitschriften.de/dms/img/?PID=GDZPPN002181339 digizeitschriften.de/en/dms/img/?PID=GDZPPN002182629
            – literature-searcher
            Nov 16 at 4:12






            Wolfgang Schwarz refined de Bruijn's results at the Tauberian level, but I think it's still a log asymptotic (his ${mathcal R}$-function is delicate to deal with IIRC). He had three papers on the general subject, the second of which is the most relevant (I give all 3 links). digizeitschriften.de/dms/img/?PID=GDZPPN002181304 digizeitschriften.de/dms/img/?PID=GDZPPN002181339 digizeitschriften.de/en/dms/img/?PID=GDZPPN002182629
            – literature-searcher
            Nov 16 at 4:12












            up vote
            2
            down vote













            sIt seems that this argument hasn't been presented yet, so I might as well include it.



            We can sort the integers $n in [1, X]$ by their radicals, which is necessarily a square-free integer $m$. Thus we have



            $$displaystyle sum_{n leq X} frac{1}{text{rad}(n)} = sum_{substack{m leq X \ m text{ square-free}}} frac{1}{m} sum_{substack{n leq X \ text{rad}(n) = m}} 1.$$



            Now, $text{rad}(n) = m$ if and only if $p | n Rightarrow p | m$. If we write $m = p_1 cdots p_k$, then



            $$displaystyle sum_{substack{n leq X \ text{rad}(n) = m}} 1 = #{(x_1, cdots, x_k) : x_i in mathbb{Z} cap [0,infty), p_1^{x_1} cdots p_k^{x_k} leq X/m}.$$



            The inequality defining the right hand side is equivalent to



            $$displaystyle x_1 log p_1 + cdots + x_k log p_k leq log(X/m),$$



            and this is just counting integer points with non-negative entries bounded by a simplex, and it is easy to see that



            $$displaystyle # {(x_1, cdots, x_k) : x_1 log p_1 + cdots + x_k log p_k leq log(X/m)} ll frac{log(X/m)}{prod_{1 leq i leq k} log(p_i)} ll log X.$$



            EDIT: This last step is wrong, but it can be fixed. Indeed, we can arrange the $p_i$'s so that $p_1 < p_2 < cdots < p_k$. It then follows from Davenport's lemma that



            $$displaystyle # {(x_1, cdots, x_k) : x_1 log p_1 + cdots + x_k log p_k leq log(X/m)} = O left(sum_{i=0}^k frac{(log X/m)^{k-i}}{prod_{1 leq j leq k-i} log p_i} right).$$



            It then follows that



            $$displaystyle sum_{n leq X} frac{1}{text{rad}(n)} ll sum_{substack{p_1 < cdots < p_k \ p_1 cdots p_k leq X}} sum_{i=0}^k frac{(log X)^{k-i}}{prod_{1 leq j leq k -i} p_i log p_i}.$$



            From here I think it is possible to get the bound $O_epsilon(X^epsilon)$, but it requires a somewhat more refined analysis on the interaction between the number of primes and the size of the primes.






            share|cite|improve this answer



















            • 4




              I worry a bit about the uniformity in the "easy to see that" estimate in the $p_j$. Still, a very natural approach.
              – Greg Martin
              Nov 16 at 0:50






            • 3




              Note that de Bruijn's estimate (quoted in my answer) shows that Greg's concern is a serious one: the sum is in fact not bounded by any fixed power of $log{X}$.
              – so-called friend Don
              Nov 16 at 3:36








            • 1




              The volume of the simplex should involve $(log(X/m))^k$ rather than just $log(X/m)$. This changes the bound dramatically.
              – Emil Jeřábek
              2 days ago










            • Using a correct formula for the volume, I get $sum_{nle X}frac1{mathrm{rad}(n)}leprod_{ple X}left(1+frac{log X}{plog p}right)$, which I believe can be bounded by $expleft(bigl(1+o(1)bigr)frac{log X}{loglog X}right)$.
              – Emil Jeřábek
              2 days ago












            • Now that I see it, this might begin to explain where Gerhard Paseman got his bound.
              – Emil Jeřábek
              2 days ago















            up vote
            2
            down vote













            sIt seems that this argument hasn't been presented yet, so I might as well include it.



            We can sort the integers $n in [1, X]$ by their radicals, which is necessarily a square-free integer $m$. Thus we have



            $$displaystyle sum_{n leq X} frac{1}{text{rad}(n)} = sum_{substack{m leq X \ m text{ square-free}}} frac{1}{m} sum_{substack{n leq X \ text{rad}(n) = m}} 1.$$



            Now, $text{rad}(n) = m$ if and only if $p | n Rightarrow p | m$. If we write $m = p_1 cdots p_k$, then



            $$displaystyle sum_{substack{n leq X \ text{rad}(n) = m}} 1 = #{(x_1, cdots, x_k) : x_i in mathbb{Z} cap [0,infty), p_1^{x_1} cdots p_k^{x_k} leq X/m}.$$



            The inequality defining the right hand side is equivalent to



            $$displaystyle x_1 log p_1 + cdots + x_k log p_k leq log(X/m),$$



            and this is just counting integer points with non-negative entries bounded by a simplex, and it is easy to see that



            $$displaystyle # {(x_1, cdots, x_k) : x_1 log p_1 + cdots + x_k log p_k leq log(X/m)} ll frac{log(X/m)}{prod_{1 leq i leq k} log(p_i)} ll log X.$$



            EDIT: This last step is wrong, but it can be fixed. Indeed, we can arrange the $p_i$'s so that $p_1 < p_2 < cdots < p_k$. It then follows from Davenport's lemma that



            $$displaystyle # {(x_1, cdots, x_k) : x_1 log p_1 + cdots + x_k log p_k leq log(X/m)} = O left(sum_{i=0}^k frac{(log X/m)^{k-i}}{prod_{1 leq j leq k-i} log p_i} right).$$



            It then follows that



            $$displaystyle sum_{n leq X} frac{1}{text{rad}(n)} ll sum_{substack{p_1 < cdots < p_k \ p_1 cdots p_k leq X}} sum_{i=0}^k frac{(log X)^{k-i}}{prod_{1 leq j leq k -i} p_i log p_i}.$$



            From here I think it is possible to get the bound $O_epsilon(X^epsilon)$, but it requires a somewhat more refined analysis on the interaction between the number of primes and the size of the primes.






            share|cite|improve this answer



















            • 4




              I worry a bit about the uniformity in the "easy to see that" estimate in the $p_j$. Still, a very natural approach.
              – Greg Martin
              Nov 16 at 0:50






            • 3




              Note that de Bruijn's estimate (quoted in my answer) shows that Greg's concern is a serious one: the sum is in fact not bounded by any fixed power of $log{X}$.
              – so-called friend Don
              Nov 16 at 3:36








            • 1




              The volume of the simplex should involve $(log(X/m))^k$ rather than just $log(X/m)$. This changes the bound dramatically.
              – Emil Jeřábek
              2 days ago










            • Using a correct formula for the volume, I get $sum_{nle X}frac1{mathrm{rad}(n)}leprod_{ple X}left(1+frac{log X}{plog p}right)$, which I believe can be bounded by $expleft(bigl(1+o(1)bigr)frac{log X}{loglog X}right)$.
              – Emil Jeřábek
              2 days ago












            • Now that I see it, this might begin to explain where Gerhard Paseman got his bound.
              – Emil Jeřábek
              2 days ago













            up vote
            2
            down vote










            up vote
            2
            down vote









            sIt seems that this argument hasn't been presented yet, so I might as well include it.



            We can sort the integers $n in [1, X]$ by their radicals, which is necessarily a square-free integer $m$. Thus we have



            $$displaystyle sum_{n leq X} frac{1}{text{rad}(n)} = sum_{substack{m leq X \ m text{ square-free}}} frac{1}{m} sum_{substack{n leq X \ text{rad}(n) = m}} 1.$$



            Now, $text{rad}(n) = m$ if and only if $p | n Rightarrow p | m$. If we write $m = p_1 cdots p_k$, then



            $$displaystyle sum_{substack{n leq X \ text{rad}(n) = m}} 1 = #{(x_1, cdots, x_k) : x_i in mathbb{Z} cap [0,infty), p_1^{x_1} cdots p_k^{x_k} leq X/m}.$$



            The inequality defining the right hand side is equivalent to



            $$displaystyle x_1 log p_1 + cdots + x_k log p_k leq log(X/m),$$



            and this is just counting integer points with non-negative entries bounded by a simplex, and it is easy to see that



            $$displaystyle # {(x_1, cdots, x_k) : x_1 log p_1 + cdots + x_k log p_k leq log(X/m)} ll frac{log(X/m)}{prod_{1 leq i leq k} log(p_i)} ll log X.$$



            EDIT: This last step is wrong, but it can be fixed. Indeed, we can arrange the $p_i$'s so that $p_1 < p_2 < cdots < p_k$. It then follows from Davenport's lemma that



            $$displaystyle # {(x_1, cdots, x_k) : x_1 log p_1 + cdots + x_k log p_k leq log(X/m)} = O left(sum_{i=0}^k frac{(log X/m)^{k-i}}{prod_{1 leq j leq k-i} log p_i} right).$$



            It then follows that



            $$displaystyle sum_{n leq X} frac{1}{text{rad}(n)} ll sum_{substack{p_1 < cdots < p_k \ p_1 cdots p_k leq X}} sum_{i=0}^k frac{(log X)^{k-i}}{prod_{1 leq j leq k -i} p_i log p_i}.$$



            From here I think it is possible to get the bound $O_epsilon(X^epsilon)$, but it requires a somewhat more refined analysis on the interaction between the number of primes and the size of the primes.






            share|cite|improve this answer














            sIt seems that this argument hasn't been presented yet, so I might as well include it.



            We can sort the integers $n in [1, X]$ by their radicals, which is necessarily a square-free integer $m$. Thus we have



            $$displaystyle sum_{n leq X} frac{1}{text{rad}(n)} = sum_{substack{m leq X \ m text{ square-free}}} frac{1}{m} sum_{substack{n leq X \ text{rad}(n) = m}} 1.$$



            Now, $text{rad}(n) = m$ if and only if $p | n Rightarrow p | m$. If we write $m = p_1 cdots p_k$, then



            $$displaystyle sum_{substack{n leq X \ text{rad}(n) = m}} 1 = #{(x_1, cdots, x_k) : x_i in mathbb{Z} cap [0,infty), p_1^{x_1} cdots p_k^{x_k} leq X/m}.$$



            The inequality defining the right hand side is equivalent to



            $$displaystyle x_1 log p_1 + cdots + x_k log p_k leq log(X/m),$$



            and this is just counting integer points with non-negative entries bounded by a simplex, and it is easy to see that



            $$displaystyle # {(x_1, cdots, x_k) : x_1 log p_1 + cdots + x_k log p_k leq log(X/m)} ll frac{log(X/m)}{prod_{1 leq i leq k} log(p_i)} ll log X.$$



            EDIT: This last step is wrong, but it can be fixed. Indeed, we can arrange the $p_i$'s so that $p_1 < p_2 < cdots < p_k$. It then follows from Davenport's lemma that



            $$displaystyle # {(x_1, cdots, x_k) : x_1 log p_1 + cdots + x_k log p_k leq log(X/m)} = O left(sum_{i=0}^k frac{(log X/m)^{k-i}}{prod_{1 leq j leq k-i} log p_i} right).$$



            It then follows that



            $$displaystyle sum_{n leq X} frac{1}{text{rad}(n)} ll sum_{substack{p_1 < cdots < p_k \ p_1 cdots p_k leq X}} sum_{i=0}^k frac{(log X)^{k-i}}{prod_{1 leq j leq k -i} p_i log p_i}.$$



            From here I think it is possible to get the bound $O_epsilon(X^epsilon)$, but it requires a somewhat more refined analysis on the interaction between the number of primes and the size of the primes.







            share|cite|improve this answer














            share|cite|improve this answer



            share|cite|improve this answer








            edited 2 days ago

























            answered Nov 16 at 0:49









            Stanley Yao Xiao

            8,27942783




            8,27942783








            • 4




              I worry a bit about the uniformity in the "easy to see that" estimate in the $p_j$. Still, a very natural approach.
              – Greg Martin
              Nov 16 at 0:50






            • 3




              Note that de Bruijn's estimate (quoted in my answer) shows that Greg's concern is a serious one: the sum is in fact not bounded by any fixed power of $log{X}$.
              – so-called friend Don
              Nov 16 at 3:36








            • 1




              The volume of the simplex should involve $(log(X/m))^k$ rather than just $log(X/m)$. This changes the bound dramatically.
              – Emil Jeřábek
              2 days ago










            • Using a correct formula for the volume, I get $sum_{nle X}frac1{mathrm{rad}(n)}leprod_{ple X}left(1+frac{log X}{plog p}right)$, which I believe can be bounded by $expleft(bigl(1+o(1)bigr)frac{log X}{loglog X}right)$.
              – Emil Jeřábek
              2 days ago












            • Now that I see it, this might begin to explain where Gerhard Paseman got his bound.
              – Emil Jeřábek
              2 days ago














            • 4




              I worry a bit about the uniformity in the "easy to see that" estimate in the $p_j$. Still, a very natural approach.
              – Greg Martin
              Nov 16 at 0:50






            • 3




              Note that de Bruijn's estimate (quoted in my answer) shows that Greg's concern is a serious one: the sum is in fact not bounded by any fixed power of $log{X}$.
              – so-called friend Don
              Nov 16 at 3:36








            • 1




              The volume of the simplex should involve $(log(X/m))^k$ rather than just $log(X/m)$. This changes the bound dramatically.
              – Emil Jeřábek
              2 days ago










            • Using a correct formula for the volume, I get $sum_{nle X}frac1{mathrm{rad}(n)}leprod_{ple X}left(1+frac{log X}{plog p}right)$, which I believe can be bounded by $expleft(bigl(1+o(1)bigr)frac{log X}{loglog X}right)$.
              – Emil Jeřábek
              2 days ago












            • Now that I see it, this might begin to explain where Gerhard Paseman got his bound.
              – Emil Jeřábek
              2 days ago








            4




            4




            I worry a bit about the uniformity in the "easy to see that" estimate in the $p_j$. Still, a very natural approach.
            – Greg Martin
            Nov 16 at 0:50




            I worry a bit about the uniformity in the "easy to see that" estimate in the $p_j$. Still, a very natural approach.
            – Greg Martin
            Nov 16 at 0:50




            3




            3




            Note that de Bruijn's estimate (quoted in my answer) shows that Greg's concern is a serious one: the sum is in fact not bounded by any fixed power of $log{X}$.
            – so-called friend Don
            Nov 16 at 3:36






            Note that de Bruijn's estimate (quoted in my answer) shows that Greg's concern is a serious one: the sum is in fact not bounded by any fixed power of $log{X}$.
            – so-called friend Don
            Nov 16 at 3:36






            1




            1




            The volume of the simplex should involve $(log(X/m))^k$ rather than just $log(X/m)$. This changes the bound dramatically.
            – Emil Jeřábek
            2 days ago




            The volume of the simplex should involve $(log(X/m))^k$ rather than just $log(X/m)$. This changes the bound dramatically.
            – Emil Jeřábek
            2 days ago












            Using a correct formula for the volume, I get $sum_{nle X}frac1{mathrm{rad}(n)}leprod_{ple X}left(1+frac{log X}{plog p}right)$, which I believe can be bounded by $expleft(bigl(1+o(1)bigr)frac{log X}{loglog X}right)$.
            – Emil Jeřábek
            2 days ago






            Using a correct formula for the volume, I get $sum_{nle X}frac1{mathrm{rad}(n)}leprod_{ple X}left(1+frac{log X}{plog p}right)$, which I believe can be bounded by $expleft(bigl(1+o(1)bigr)frac{log X}{loglog X}right)$.
            – Emil Jeřábek
            2 days ago














            Now that I see it, this might begin to explain where Gerhard Paseman got his bound.
            – Emil Jeřábek
            2 days ago




            Now that I see it, this might begin to explain where Gerhard Paseman got his bound.
            – Emil Jeřábek
            2 days ago










            up vote
            2
            down vote













            Here is another approach. Let $p_0$ be the largest prime with $(p_0)^{(e-1)p_0} leq x$. The desired sum is bounded above by $P =prod_{p}(1+lfloor log_p x rfloor/p)$, where the product is over primes $p$ less than or equal to $x$.



            When we pick out those terms of $P$ whose numerator is $k$, and consider the product of just those terms, we look at those primes with $p^k lt x leq p^{k+1}$ and the log of that product is bounded by $k$ times the sum $ S_k$ of $1/p$ over those primes. Mertens theorem gives $log((k+1)/k)$ as an approximate value for $S_k$ for small $k$, so the subproduct is approximated by $((k+1)/k)^k$. So for $k=1$ up to just before $(e-1)p_0$, we have broken the product over larger primes than $p_0$ into sub products each bounded by $e$.



            So we have an immediate upper bound on $P$ of $(1 + (log_2 x)/2)^{pi(p_0)}e^{(e-1)p_0}$. For $x$ not too small, this is less than $(log x)^{pi(p_0)}e^{(e-1)p_0}$. So far we have log of your sum is dominated by $log P$ which in turn is dominated by $(e-1)p_0 + pi(p_0)loglog x$. We want this last quantity to be asymptotically less than $epsilonlog x$.



            Well, $(e-1)p_0 leq (log x)/(log p_0)$, so $p_0 lt (log x)/f(x)$ for a function $f(x)$ which is slowly increasing. But $pi(p_0)$ is asymptotically $( (log x)/f(x))/(loglog x - log f(x))$, so the second term is only slightly bigger than $log(x)/f(x)$, but small enough to dip below $epsilonlog x$.



            If you put in some work, you find $f(x)$ is less than but close to $loglog x$, and far enough away for the fraction $(loglog x)/(loglog x - log f(x))$ not to be a problem. Although the prime number theorem and Mertens theorem on sum 1/p are used, this should be elementary enough.



            Observation 2018.11.16 Since a weak result is wanted, we can weaken some of the requirements: replace the prime number theorem by a result that bounds $pi(p)$ from above by $Ap/log p$ , and regroup the terms of the partial product $P$ into pieces each of which multiply to a number less than $e^2$. One should not need the full strength of Mertens for this. Or, follow the suggestion in the comment below and focus on the product of the biggest $pi(p_0)$ terms, and show the difference between this product and the sum is sufficiently small. End Observation 2018.11.16.



            Gerhard "For Some Value Of 'Enough'" Paseman, 2018.11.15.






            share|cite|improve this answer























            • One can also upper bound the sum by dividing it into two: one with terms where the radical includes primes bigger than p_0, and one with terms where the radical has no primes bigger than p_0. The argument above shows that the first part is less substantial than the second part. This suggests to me that looking at the second part (sum over p_0-smooth numbers) is more interesting and requires more delicacy. Gerhard "Waves Hands Over Hard Parts" Paseman, 2018.11.16.
              – Gerhard Paseman
              2 days ago















            up vote
            2
            down vote













            Here is another approach. Let $p_0$ be the largest prime with $(p_0)^{(e-1)p_0} leq x$. The desired sum is bounded above by $P =prod_{p}(1+lfloor log_p x rfloor/p)$, where the product is over primes $p$ less than or equal to $x$.



            When we pick out those terms of $P$ whose numerator is $k$, and consider the product of just those terms, we look at those primes with $p^k lt x leq p^{k+1}$ and the log of that product is bounded by $k$ times the sum $ S_k$ of $1/p$ over those primes. Mertens theorem gives $log((k+1)/k)$ as an approximate value for $S_k$ for small $k$, so the subproduct is approximated by $((k+1)/k)^k$. So for $k=1$ up to just before $(e-1)p_0$, we have broken the product over larger primes than $p_0$ into sub products each bounded by $e$.



            So we have an immediate upper bound on $P$ of $(1 + (log_2 x)/2)^{pi(p_0)}e^{(e-1)p_0}$. For $x$ not too small, this is less than $(log x)^{pi(p_0)}e^{(e-1)p_0}$. So far we have log of your sum is dominated by $log P$ which in turn is dominated by $(e-1)p_0 + pi(p_0)loglog x$. We want this last quantity to be asymptotically less than $epsilonlog x$.



            Well, $(e-1)p_0 leq (log x)/(log p_0)$, so $p_0 lt (log x)/f(x)$ for a function $f(x)$ which is slowly increasing. But $pi(p_0)$ is asymptotically $( (log x)/f(x))/(loglog x - log f(x))$, so the second term is only slightly bigger than $log(x)/f(x)$, but small enough to dip below $epsilonlog x$.



            If you put in some work, you find $f(x)$ is less than but close to $loglog x$, and far enough away for the fraction $(loglog x)/(loglog x - log f(x))$ not to be a problem. Although the prime number theorem and Mertens theorem on sum 1/p are used, this should be elementary enough.



            Observation 2018.11.16 Since a weak result is wanted, we can weaken some of the requirements: replace the prime number theorem by a result that bounds $pi(p)$ from above by $Ap/log p$ , and regroup the terms of the partial product $P$ into pieces each of which multiply to a number less than $e^2$. One should not need the full strength of Mertens for this. Or, follow the suggestion in the comment below and focus on the product of the biggest $pi(p_0)$ terms, and show the difference between this product and the sum is sufficiently small. End Observation 2018.11.16.



            Gerhard "For Some Value Of 'Enough'" Paseman, 2018.11.15.






            share|cite|improve this answer























            • One can also upper bound the sum by dividing it into two: one with terms where the radical includes primes bigger than p_0, and one with terms where the radical has no primes bigger than p_0. The argument above shows that the first part is less substantial than the second part. This suggests to me that looking at the second part (sum over p_0-smooth numbers) is more interesting and requires more delicacy. Gerhard "Waves Hands Over Hard Parts" Paseman, 2018.11.16.
              – Gerhard Paseman
              2 days ago













            edited 2 days ago

























            answered Nov 15 at 19:58









            Gerhard Paseman



















             
