Simulating Losses

In this example we will expand the previous basic example by simulating some loss of the encoded data. This can be done simply by not “transmitting” some encoded symbols to the decoder. The complete example is shown below.

// Copyright Steinwurf ApS 2016.
// Distributed under the "STEINWURF RESEARCH LICENSE 1.0".
// See accompanying file LICENSE.rst or
// http://www.steinwurf.com/licensing

#include <cstdint>
#include <cstdlib>
#include <ctime>
#include <algorithm>
#include <iostream>
#include <vector>

#include <kodocpp/kodocpp.hpp>

int main()
{
    // Seed the random number generator to produce different data every time
    srand((uint32_t)time(0));

    // Set the number of symbols (i.e. the generation size in RLNC
    // terminology) and the size of a symbol in bytes
    uint32_t max_symbols = 16;
    uint32_t max_symbol_size = 1400;

    // In the following we will make an encoder/decoder factory.
    // The factories are used to build actual encoders/decoders
    kodocpp::encoder_factory encoder_factory(
        kodocpp::codec::full_vector,
        kodocpp::field::binary8,
        max_symbols,
        max_symbol_size);

    kodocpp::encoder encoder = encoder_factory.build();

    kodocpp::decoder_factory decoder_factory(
        kodocpp::codec::full_vector,
        kodocpp::field::binary8,
        max_symbols,
        max_symbol_size);

    kodocpp::decoder decoder = decoder_factory.build();

    std::vector<uint8_t> payload(encoder.payload_size());
    std::vector<uint8_t> data_in(encoder.block_size());
    // Just for fun - fill the data with random data
    std::generate(data_in.begin(), data_in.end(), rand);
    // Assign the data buffer to the encoder so that we may start
    // to produce encoded symbols from it
    encoder.set_const_symbols(data_in.data(), encoder.block_size());
    // Create a buffer which will contain the decoded data, and we assign
    // that buffer to the decoder
    std::vector<uint8_t> data_out(decoder.block_size());
    decoder.set_mutable_symbols(data_out.data(), decoder.block_size());

    //! [0]
    uint32_t encoded_count = 0;
    uint32_t dropped_count = 0;

    while (!decoder.is_complete())
    {
        // Encode a packet into the payload buffer
        uint32_t bytes_used = encoder.write_payload(payload.data());
        std::cout << "Bytes used = " << bytes_used << std::endl;

        ++encoded_count;

        if (rand() % 2)
        {
            ++dropped_count;
            continue;
        }

        // Pass that packet to the decoder
        decoder.read_payload(payload.data());
    }

    std::cout << "Encoded count = " << encoded_count << std::endl;
    std::cout << "Dropped count = " << dropped_count << std::endl;
    //! [1]

    // Check if we properly decoded the data
    if (data_in == data_out)
    {
        std::cout << "Data decoded correctly" << std::endl;
    }

    return 0;
}

As the attentive reader might notice, only the coding loop is changed from the basic example.

    uint32_t encoded_count = 0;
    uint32_t dropped_count = 0;

    while (!decoder.is_complete())
    {
        // Encode a packet into the payload buffer
        uint32_t bytes_used = encoder.write_payload(payload.data());
        std::cout << "Bytes used = " << bytes_used << std::endl;

        ++encoded_count;

        if (rand() % 2)
        {
            ++dropped_count;
            continue;
        }

        // Pass that packet to the decoder
        decoder.read_payload(payload.data());
    }

    std::cout << "Encoded count = " << encoded_count << std::endl;
    std::cout << "Dropped count = " << dropped_count << std::endl;

The change is fairly simple: we introduce a 50% packet loss using rand() % 2 and add a dropped_count variable to keep track of how many symbols were dropped.
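
If a loss rate other than 50% is needed, the rand() % 2 check could be replaced by a Bernoulli trial from the standard <random> header. The following is only a sketch of such a variation, not part of the original example; the 0.3 loss probability is an arbitrary illustrative value. It would replace the coding loop shown above:

#include <random>

// Drop each packet with a configurable probability instead of the
// fixed 50% used in the example above.
std::random_device rd;
std::mt19937 generator(rd());
std::bernoulli_distribution drop(0.3); // illustrative loss probability

while (!decoder.is_complete())
{
    // Encode a packet into the payload buffer
    encoder.write_payload(payload.data());
    ++encoded_count;

    // Simulate the loss: a "true" outcome means the packet is lost
    if (drop(generator))
    {
        ++dropped_count;
        continue;
    }

    // Pass the surviving packet to the decoder
    decoder.read_payload(payload.data());
}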

Because we are using a rateless code, the encoder can, in theory, produce an unlimited number of coded symbols. This means that as long as the loss rate is below 100%, the decoder will eventually be able to finish the decoding.
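
As a rough back-of-the-envelope estimate (an approximation, not a guarantee from the library): with a generation of g symbols and an independent packet loss probability p, the decoder must receive about g useful packets, so the encoder has to send roughly g / (1 - p) packets on average, plus a small overhead for linearly dependent coded packets. A minimal sketch of this calculation using the values from the example:

#include <cstdint>
#include <iostream>

int main()
{
    uint32_t symbols = 16; // generation size used in the example
    double loss = 0.5;     // loss probability used in the example

    // Expected number of transmitted packets, ignoring the small
    // overhead caused by linearly dependent coded packets.
    double expected = symbols / (1.0 - loss);

    std::cout << "Expected transmissions ~ " << expected << std::endl; // ~ 32
    return 0;
}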

A graphical representation of the setup is seen in the figure below.

[Figure: tutorial_add_loss.svg]

The output of this example will look something like the following (the exact output changes from run to run, since we seed the random number generator with the current time):

Bytes used = 1405
Bytes used = 1405
Bytes used = 1405
Bytes used = 1405
Bytes used = 1405
Bytes used = 1405
Bytes used = 1405
Bytes used = 1405
Bytes used = 1405
Bytes used = 1405
Bytes used = 1405
Bytes used = 1405
Bytes used = 1405
Bytes used = 1405
Bytes used = 1405
Bytes used = 1405
Bytes used = 1417
Bytes used = 1417
Bytes used = 1417
Bytes used = 1417
Bytes used = 1417
Bytes used = 1417
Bytes used = 1417
Bytes used = 1417
Bytes used = 1417
Bytes used = 1417
Bytes used = 1417
Encoded count = 27
Dropped count = 11

An interesting thing to notice is the number of bytes used: it increases slightly after the encoder has produced 16 symbols, i.e. the number of symbols in the generation. This happens because the encoder leaves the systematic phase, during which the symbols are sent uncoded. This technique will be explained in the following example.