First error: Query of LOB fields caused heap usage to exceed limit























I am trying to run a batch job that executes a SOQL query inside its execute() method, with the following setup:


  • the execute() method contains the line List<SObject> results = (List<SObject>) Database.query(query);, and

  • the start() method queries 10,000 records (SELECT Id FROM statustracking__c LIMIT :recordsLimit) [** see the note below], and

  • the batch size is 2,000, and

  • the object has no long text areas, attachments, or other large-content fields


Running this batch produces the error below:



"Query of LOB fields caused heap usage to exceed limit."



Can someone explain or suggest:


  • why is this error thrown even though the record limit is 10,000?

  • are there any alternative ways, other than PK chunking, to process 100,000 records? (that is the volume I actually need to process)



"query" variable in the first point contains the below SOQL;



SELECT ownervalue__c, 
managerlevel3__c,
isdeleted,
ownerid__c,
previous_owner__c,
enddate__c,
isteammanager__c,
iscreatorloggedin__c,
id,
networkresponsevalue__c,
createddate,
createddatelist__c,
managerlevel1__c,
statusduration__c,
status__c,
fieldschanged__c,
ownername__c,
startdate__c,
assignedbyfillinbox__c,
createdbyid,
statusvalid__c,
lastmodifiedbyid,
previous_owner_id__c,
recordtypeid,
agent__c,
duration__c,
lastmodifieddate,
ischannelmanager__c,
businesstime__c,
durationmins__c,
previous_owner_name__c,
day_of_the_week__c,
networkcommentsvalue__c,
systemmodstamp,
durationhours__c,
managerlevel2__c,
creatorrole__c,
ignore__c,
fax__c,
network__c,
businesshours__c,
business_hours_status__c,
case__c,
name,
escalationtier__c,
business_hour_duration__c,
ccmanager__c,
subsequentstatusvalue__c,
start_date_hour__c,
durationseconds__c,
statusvalue__c,
case_contact_id__c,
creatorprofile__c
FROM statustracking__c
WHERE id NOT IN (SELECT statustracking__c
FROM archivelog__c
WHERE sobjecttype__c = 'StatusTracking__c'
AND ( status__c = 'Archived'
OR status__c = 'Failed To Archive'
OR status__c = 'Failed To Delete Original' ))
AND createddate < :olderThan
ORDER BY createddate ASC



NOTE: ** the start() query now fetches all records with a limit of 10,000, and the filtering has been moved to the execute() method. The filter was previously in the start() method, where it threw the error "First error: [REQUEST_RUNNING_TOO_LONG] Your request was running for too long, and has been stopped."







































      apex soql batch limits heap






asked 2 days ago by Hasantha






















          1 Answer






























          The query you are executing in your execute() method does not appear to contain any reference to or filter based upon the scope parameter received by execute().



          What this means is that in terms of executing this query, what you do in your start() method and what you set your batch size to are both irrelevant. You're just running this query in an asynchronous context apropos of nothing, and you're hitting a limit - either the heap limit or the query processing time limit.



          Absent further details on your data model and your code, the query looks to me like a major performance problem. While CreatedDate is indexed, your NOT IN subquery looks like it is probably very inefficient, particularly if - but not only if - ArchiveLog__c.SObjectType__c and Status__c aren't indexed and the data volume in that object is large. And you are pulling back a very large number of fields on potentially a very large number of records.



          I would recommend you do one of two things:




          • Construct a more idiomatic batch class, where you execute a single query in your start() method and process each batch in execute() without trying to perform additional broad-based queries. If possible given the full details of your data model and objective, tune your query to make this possible by using indexed fields, avoiding negative criteria, and following other best practices for large data volumes. The Query Plan tool may help you achieve this.

          • If you are not able to do so, ensure that the query you perform in your execute() method is filtered by the Ids of the records in the scope for this batch. This will allow you to dramatically limit the bounds of each execute() query in terms of both execution time, via an Id IN :idSet type filter, and in terms of the heap space consumed. Note that pursuing this route means that your batch class will ultimately have to traverse all of the records of StatusTracking__c in your database, which may make the overall performance of the process unacceptable.

          • In either route, tune your batch size to make each invocation of execute() realistically completable within a single transaction. 2000 is a very large batch. If you encounter limits issues within a single batch, turn the batch size down until you're able to complete them successfully.


          PK chunking is a technique you can use to execute very large scale queries with the Bulk API, not via Batch Apex.
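
The two recommended routes can be sketched roughly as follows. This is a minimal, illustrative outline only: the class name, the cutoff date, and the field lists are assumptions, not taken from the original code.

```apex
// Option 1 (idiomatic): the heavy query runs once in start(), and execute()
// only processes the records handed to it in `scope`.
public class StatusTrackingArchiveBatch implements Database.Batchable<SObject> {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        // Selective filter on the indexed CreatedDate field; the cutoff is
        // an assumed example value.
        Datetime olderThan = Datetime.now().addDays(-365);
        return Database.getQueryLocator([
            SELECT Id, Status__c, CreatedDate
            FROM StatusTracking__c
            WHERE CreatedDate < :olderThan
        ]);
    }

    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        // Option 2, applied per batch: any further query inside execute() is
        // bounded by the Ids of this batch's scope, keeping both heap usage
        // and query time small.
        Set<Id> idSet = new Map<Id, SObject>(scope).keySet();
        List<ArchiveLog__c> alreadyArchived = [
            SELECT StatusTracking__c
            FROM ArchiveLog__c
            WHERE StatusTracking__c IN :idSet
              AND SObjectType__c = 'StatusTracking__c'
              AND Status__c IN ('Archived', 'Failed To Archive',
                                'Failed To Delete Original')
        ];
        // ...archive/delete the records in scope that have no log entry...
    }

    public void finish(Database.BatchableContext bc) {}
}
```

A batch like this would then be started with a deliberately modest batch size, per the third bullet, e.g. Database.executeBatch(new StatusTrackingArchiveBatch(), 200);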





























                answered 2 days ago by David Reed (edited 2 days ago)






























                     
