What parameter determines the maximum value EvalSum can achieve?

I tried to encrypt a vector of all ones into a CKKS ciphertext, then multiply each element by its corresponding weight and add the products together.
The idea is similar to converting an all-ones binary string into base 10.

    // Encrypt a vector of all ones
    std::vector<double> binaryVector(batchSize, 1);
    cc->EvalSumKeyGen(keys.secretKey, keys.publicKey);
    auto pBin = cc->MakeCKKSPackedPlaintext(binaryVector);
    const Ciphertext<DCRTPoly> cBin = cc->Encrypt(keys.publicKey, pBin);

    // Build the weights: powerOfTwoVector[i] = 2^i
    std::vector<double> powerOfTwoVector(batchSize);
    for (size_t i = 0; i < batchSize; ++i) {
        powerOfTwoVector[i] = 1.0 * (1 << i);
    }
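
The multiply-and-sum step itself (not shown above) would look roughly like this:

    // Multiply slot-wise by the weights, then sum the first batchSize slots
    auto pWeights  = cc->MakeCKKSPackedPlaintext(powerOfTwoVector);
    auto cWeighted = cc->EvalMult(cBin, pWeights);
    auto cSum      = cc->EvalSum(cWeighted, batchSize);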

So far: if I set SecurityLevel sl = HEStd_NotSet, then with ringDim = 32 and 16 slots the result is 255, 255, 255, …, but when ringDim and the slot count are larger, the results become negative.
If I set SecurityLevel sl = HEStd_128_classic, the result is also wrong.

Make sure you don’t confuse the batchSize with the number of slots. If you do not specify the batchSize and you leave the encoding fully packed, CKKS uses ringDim/2 slots, so the highest power of two you multiply with will be 2^(ringDim/2 - 1), which can be far too large, e.g., 2^4095 for ringDim = 8192. You should set the batchSize to the (power-of-two) number of bits you are interested in.
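
For illustration, a minimal sketch of fixing the batch size when generating the context (the parameter values here are placeholders, not recommendations):

    #include "openfhe.h"
    using namespace lbcrypto;

    CCParams<CryptoContextCKKSRNS> parameters;
    parameters.SetMultiplicativeDepth(1);
    parameters.SetScalingModSize(50);
    parameters.SetBatchSize(16);  // sum over 16 slots instead of ringDim/2
    CryptoContext<DCRTPoly> cc = GenCryptoContext(parameters);
    cc->Enable(PKE);
    cc->Enable(KEYSWITCH);
    cc->Enable(LEVELEDSHE);
    cc->Enable(ADVANCEDSHE);  // required for EvalSum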

I could also identify another issue in the code.
(1 << i) is evaluated in a 32-bit integer type (int32_t on most platforms). If i > 30, a signed integer overflow occurs and you won’t get what you expect.
You could use pow(2, i), which supports a much larger range of i, but make sure not to go beyond a double’s capacity: powers of two are only representable up to 2^1023.
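
For example, one safe way to rewrite the weight loop (std::ldexp(1.0, i) computes 2^i directly in double precision):

    #include <cmath>

    std::vector<double> powerOfTwoVector(batchSize);
    for (size_t i = 0; i < batchSize; ++i) {
        // 2^i as a double: no int32_t overflow, exact up to 2^1023
        powerOfTwoVector[i] = std::ldexp(1.0, static_cast<int>(i));
    }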

This is indeed a problem. Thank you for the reminder.

In OpenFHE, which parameters determine the upper limit of the sum?

The CKKS precision would be the most important factor, I would say. Make sure the intermediate values in your computation remain within a manageable range (not extremely large or extremely small). There should be no limit as long as the intermediate values stay within the CKKS precision, which is mainly determined by firstModSize, scaleModSize, and the scaling technique.
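
For example, in OpenFHE these are the knobs being referred to (the values here are placeholders):

    CCParams<CryptoContextCKKSRNS> parameters;
    parameters.SetFirstModSize(60);                // bounds the magnitude of the decrypted result
    parameters.SetScalingModSize(50);              // scaling factor of about 2^50: controls precision
    parameters.SetScalingTechnique(FLEXIBLEAUTO);  // the rescaling strategy also affects precision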