Blog

The blog is the place to read about the habits of IT folks, announcements of IT events, market news, and tips and tricks for succeeding in this dynamic field.
We follow the trends; your job is to sit back in an armchair and read :)

Tag: JSON (19 results)
24.05.2024. ·
1 min

Programmers create programming languages in their thirties

Most of the programming languages we use today were created by programmers in their thirties. The median age of programming language creators is 36, while the average age is 37.5, according to data collected by Breck Yunits. This trend suggests that programmers are at their most productive once they have gained enough experience but are still full of innovative ideas.

The youngest creator of a successful programming language was 16; the oldest was 70. Ken Iverson created the J language at 70, proving that it is never too late to innovate. At the other end, Aaron Swartz built atx at just 16, which later led to his work on Markdown with John Gruber. Languages such as TypeScript, Go, JSON and Clojure were created by programmers in their forties and fifties, which shows that more senior developers also contribute significant innovations to the world of technology.

Interestingly, nobody under 20 has ever created a popular programming language. Rasmus Lerdorf created PHP at 27, and Richard Stallman wrote Emacs at 23. So while young programmers can be innovative, the most successful languages tend to come from more experienced developers. Creating programming languages can be fun and rewarding regardless of age. Innovation knows no age limit, and examples from the past show that it is possible to build something significant at any stage of life.

HelloWorld
0
23.05.2024. ·
2 min

SQL turns 50: Why is it still the most important database language?

Structured Query Language (SQL) celebrates its 50th birthday this year. SQL was introduced in 1974 by Donald Chamberlin and Raymond Boyce as SEQUEL, but the name was later changed due to trademark issues. Since then, SQL has become the standard in the database world, and its popularity has not waned after half a century.

Today SQL is the third most popular programming language among professional developers, according to Stack Overflow data, while IEEE has ranked SQL as the most important language for landing a job. This is partly due to its use in fields such as artificial intelligence, analytics and software development. Unlike other old languages such as COBOL and FORTRAN, which mostly live on in existing legacy systems, SQL remains essential for new projects and innovation. SQL makes it easy to manage and interact with data, which makes it indispensable in many business processes.

One reason for SQL's longevity is its ability to adapt to new technologies. SQL has added support for GIS data, JSON documents, as well as XML and YAML. It can also be combined with vector data, which enables the development of generative AI applications. Beyond its flexibility, SQL rests on a solid mathematical theory, which makes it reliable and efficient. SQL was the first programming language to return multiple rows per query, which makes it easier to analyze and use data for business purposes.

Although there have been attempts to replace SQL with other technologies, such as NoSQL databases and natural-language interfaces, SQL is still unavoidable. Even generative AI, which can write SQL code instead of the programmer, depends on SQL to interact with data. SQL will continue to play a key role in IT systems, even if it becomes less visible to developers. With the IT industry relying ever more heavily on data, SQL will remain essential to the operation of countless systems.

HelloWorld
0
17.04.2024. ·
5 min

Node.js Lambda Package Optimization: Decrease Size and Increase Performance Using ES Modules

This article explains how to optimize Node.js AWS Lambda functions packaged in the ES module format. It also walks through an example with bundling and AWS CDK, and shows the resulting performance improvement.

Node.js has two formats for organizing and packaging code: CommonJS (CJS), which is legacy, slower and larger, and ES modules (ESM), which are modern, faster and smaller. CJS is still the default module system, and sometimes the only option some tools support. Let's say you have a Node.js project but haven't thought about this before. You may now ask yourself: in which format is my code packaged? Let's look at some JavaScript code examples. In JavaScript, it is clear just by looking at the code. But in TypeScript, you may find yourself writing code in ESM syntax while using CJS at runtime! This happens if the TypeScript compiler is configured to produce CommonJS output. Compiler settings can be adjusted in the tsconfig.json file, and we will show how to avoid CJS output in an example later.

There are two ways for Node.js to determine the package format. The first is to look for the nearest package.json file and its type property. We can set it to module if we want all .js files in the project treated as ES modules, or omit the type property (or set it to commonjs) if we want the code packaged in the CommonJS format. The second way is file extensions: files ending in .mjs (for ES modules) or .cjs (for CommonJS) override the package.json type and force the specified format, while files ending in plain .js inherit the chosen package format.

ES modules

So how exactly can ESM help us improve Lambda performance? ES modules support features like static code analysis and tree shaking, which means the code can be optimized before runtime. This helps eliminate dead code and drop unneeded dependencies, which reduces the package size, and you benefit from that in cold start latency.
Function size affects the time needed to load the Lambda, so we want to reduce it as much as possible. Lambda functions support ES modules from the Node.js 14.x runtime onward.

Example

Let's take a simple TypeScript project as an example to show what needs to be configured to declare a project an ES module. We will add just a couple of dependencies, including aws-sdk for DynamoDB, Logger from Lambda Powertools, and Lambda type definitions. The type field in package.json defines the package format for Node.js; we use the module value to target ES modules. The module property in tsconfig.json sets the output format for the TypeScript compiler; the ES2022 value says that we are compiling our code to one of the ES module versions of JavaScript. You can find more details on compiler settings at https://www.typescriptlang.org/tsconfig.

Bundling

To simplify deployment and runtime, you can use a tool called a bundler to combine your application code and dependencies into a single JavaScript file. This technique comes from frontend applications and browsers, but it is handy for Lambda as well. Bundlers can also exploit the ES module features mentioned earlier, which is why they are an important part of this optimization. Some of the popular ones are esbuild, webpack and rollup.

AWS CDK

If you're using CDK to create your cloud infrastructure, the good news is that the built-in NodejsFunction construct uses esbuild under the hood. It also lets you configure bundler properties, so you can tune the process to your needs. With these settings, the bundler will prioritize the ES module version of a dependency over CommonJS. Not all third-party libraries support ES modules, though, so in those cases we must use their CommonJS versions.

What's important to mention: if you have an existing CommonJS project, you can keep it as is and still benefit from this improvement.
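Concretely, the two settings described above boil down to fragments like these (the target value is my assumption; check it against your own tsconfig):

```jsonc
// package.json (fragment): treat .js files in the project as ES modules
{
  "type": "module"
}

// tsconfig.json (fragment): make the TypeScript compiler emit ESM output
{
  "compilerOptions": {
    "module": "ES2022",
    "target": "ES2022"
  }
}
```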
The only thing you need to add is the mainFields property in the CDK bundling section, which sets the order of formats tried when resolving a package. This can help if you have trouble switching a project completely over to ES modules.

Let's use a simple function that connects to DynamoDB as an example; its job is just to read a record from the database. We will create two Lambda functions with this same code: one using the CDK example above, and the other using the same CDK setup but without the ESM bundling properties. That gives us separate CommonJS and ES module functions that are easy to compare. Here is the bundling output during CDK deploy with esbuild: the ESM version of the function has its package size reduced by almost 50%! The source maps file (.map) is smaller now too.

esbuild provides a page for visualizing the contents of your bundle through several charts, which helps you understand what your package consists of; it is available at https://esbuild.github.io/analyze. For our test functions, the CommonJS package is improved by the bundler only through minification, which got it down to 500 KB, with packages under @aws-sdk taking up more than half of the package. With the ES-module-first approach to bundling, the package size goes down even further. As you can see, there is still some code in CJS format, as some dependencies are only available as CommonJS.

Performance results

Now let's see how much improvement we get by comparing cold start latency between the ES module and CommonJS versions of the function. A small load test with up to 10 concurrent users was executed to obtain the metrics, and the results were visualized using CloudWatch Logs Insights for both the CommonJS and ES module functions. The numbers are in milliseconds: on average we reduced cold start duration by 50+ ms, or 17%. The difference is bigger for minimum latency, which was almost 70 ms shorter, or 26%.
These are not drastic differences, but in my experience with real-world projects, package size can go down by as much as 10x, and cold start latency by 300-400 ms.

Conclusion

The improvement from using ES modules shows even in the simple example above. How much you can lower cold start latency depends on how big your function is and whether it needs a lot of dependencies to do its job. But that's the way it should be, right? For example, simple functions that just send a message to SQS/SNS and the like don't need dependencies from the rest of the app, such as a database or Redis client, which might be heavy; and yet shared code sometimes ends up all over the place. Even if the improvement in your case is not that big, it may still be worth considering ESM. Just be aware that some tools and frameworks still have poor or no ESM support. In the end, why would you want to package and deploy code you won't use, anyway? 😄

Author: Marko Jevtović, Software Developer @ Levi9 Serbia

28.11.2023. ·
2 min

Google versus ad blockers

Google is sticking to a plan that is expected to cause big problems for Chrome extensions next year. Ad blockers, a popular type of Chrome extension, are expected to be hit the hardest. What is this actually about? Google has announced that it will retire Manifest V2 in June 2024 and move to Manifest V3, the latest specification for Chrome extensions. Manifests V2 and V3 are essentially the rules that extension developers must follow if they want their extensions accepted into the Chrome Web Store. Manifest V2 is already deprecated and the Chrome Web Store no longer accepts Manifest V2 extensions, but browsers can still run them, for now.

This is part of Google's strategy to increase advertising revenue. Alphabet earned 280 billion dollars in 2022, of which 224 billion came from ads. The problem is that YouTube contributed only 29 billion, roughly 11%, which is a small share considering that it is the second most popular platform on the internet. Instagram and Facebook are less popular than YouTube, yet their ad revenue is significantly higher. Interestingly, a year ago the FBI recommended using an ad blocker as protection against cybercriminals who use advertising services to impersonate legitimate businesses and steal users' information.

The difference between Manifest V2 and Manifest V3: to build an extension, we need a manifest.json file. Looking at uBlock's manifest.json, we can see that it uses the webRequest and webRequestBlocking APIs. These APIs let the extension simply intercept a network request and modify it so that ads are not shown. In Manifest V3, instead of these APIs we have the chrome.declarativeNetRequest API, which imposes much tighter restrictions on dynamic content filtering.

What might the epilogue of this story be? In time, a more sophisticated ad blocker compliant with Manifest V3 will very likely appear, but it is just as certain that Google will then try to restrict the capabilities of ad blockers and extensions even further.

Written by Aleksandar Lukač and Sergej Soldat.
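For reference, the API difference described above shows up in the extension manifests roughly like this (fragments only, limited to the permission-related keys; a real ad blocker's manifest.json is far more involved):

```jsonc
// Manifest V2 (fragment): blocking interception of network requests
{
  "manifest_version": 2,
  "permissions": ["webRequest", "webRequestBlocking", "<all_urls>"]
}

// Manifest V3 (fragment): declarative rules replace arbitrary blocking logic,
// and host access moves to the separate host_permissions key
{
  "manifest_version": 3,
  "permissions": ["declarativeNetRequest"],
  "host_permissions": ["<all_urls>"]
}
```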

07.09.2023. ·
7 min

How I made the Spring Boot startup analyser

It's no secret that Spring applications can sometimes freeze at startup. This is especially noticeable as a project grows: the new service starts quickly and shows great responsiveness, then it acquires some serious functionality, and the final distribution package swells by dozens of megabytes. Now, simply launching that service locally means waiting half a minute, a minute, two... At such moments, the developer may ponder: why on Earth is it taking so long? What's going on? Maybe I shouldn't have added that particular library?

Hi, my name is Alexey Lapin, and I am a Lead Developer at Luxoft. In this article, I'll talk about a web application for analysing the startup phase of Spring Boot services, which uses data from the startup actuator endpoint. This tool may help answer the questions above.

Foreword

I made this application for myself, to understand a new Spring module I hadn't seen before and to practice on the front end. I saw various solutions on the internet, but they either did not work or had not been updated for a long time, and I wanted to create an up-to-date auxiliary tool for this Spring Boot functionality.

Spring Boot Startup Endpoint

Starting with version 2.4, Spring Boot has an ApplicationStartup metric that records the events (steps) that occur during service startup, and an actuator endpoint that exposes the list of these events.
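One assumption worth stating explicitly (the article does not show its configuration): with default Spring Boot settings, most actuator endpoints are not exposed over HTTP, so reaching the startup endpoint typically requires something like the following in application.properties:

```properties
# Expose the startup endpoint (alongside the default health endpoint)
management.endpoints.web.exposure.include=health,startup
```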
Here's what it looks like:

{
    "springBootVersion": "2.5.3",
    "timeline": {
        "startTime": "2021-09-06T13:38:05.049490700Z",
        "events": [
            {
                "endTime": "2021-09-06T13:38:05.159435400Z",
                "duration": "PT0.0898001S",
                "startTime": "2021-09-06T13:38:05.069635300Z",
                "startupStep": {
                    "name": "spring.boot.application.starting",
                    "id": 0,
                    "tags": [
                        {
                            "key": "mainApplicationClass",
                            "value": "com.github.al.realworld.App"
                        }
                    ],
                    "parentId": null
                }
            },
            ...
            {
                "endTime": "2021-09-06T13:38:06.420231Z",
                "duration": "PT0.0060049S",
                "startTime": "2021-09-06T13:38:06.414226100Z",
                "startupStep": {
                    "name": "spring.beans.instantiate",
                    "id": 7,
                    "tags": [
                        {
                            "key": "beanName",
                            "value": "org.springframework.boot.autoconfigure.internalCachingMetadataReaderFactory"
                        }
                    ],
                    "parentId": 6
                }
            },
            ...
        ]
    }
}

A detailed description of all the fields can be found in the Spring Boot Actuator documentation, but it is all pretty straightforward. Each event has an "id" and a "parentId", which allows building a tree view. There is also a "duration" field, which shows the time spent on the event plus the combined duration of all child events. The "tags" field contains a list of event attributes, such as the name or class of the created bean.
To enable the collection of startup events, you must pass an instance of the BufferingApplicationStartup class to the setApplicationStartup method of SpringApplication. Its constructor accepts the number of events to record; all events above this limit are ignored and will not appear in the startup endpoint's output.

@SpringBootApplication
public class App {
    public static void main(String[] args) {
        SpringApplication application = new SpringApplication(App.class);
        application.setApplicationStartup(new BufferingApplicationStartup(1000));
        application.run(args);
    }
}

By default, the endpoint is available at /actuator/startup and supports GET for receiving events, and POST for receiving events and clearing the buffer, so subsequent calls will return an empty list of events.

Okay, let's go. We will treat the information provided by the startup endpoint as our data for analysis. The analyser web application is a single-page application (SPA) without a back end: you just upload the events that occurred during the service startup, and it visualises them. The uploaded data is neither transferred nor stored anywhere.

I chose TypeScript as my go-to programming language, as it seemed like a better option for a Java developer than JavaScript due to its strong typing and object-oriented features. I found it very easy to switch from Java to TypeScript and quickly write working code. As my UI framework, I chose Vue.js 3. To be clear, I have nothing against React, Angular and other front-end frameworks, but at the time Vue.js seemed like a good option due to its low entry threshold and excellent preset tools. Then it was time to choose a component library: it needed to be compatible with Vue.js 3 and have components for working with tables.
I considered Element Plus, Ionic Vue, and Naive UI, but thanks to its customisable table components I ended up using the PrimeVue library.

The application has a navigation bar with an Analyser item (the main screen of the application), Usage (user instructions) and a link to the project's GitHub repository. The main page displays a form for entering data, which can be done in three different ways. The first way is to provide a link to a deployed Spring Boot service; an HTTP request is then made to the specified endpoint and the data is uploaded automatically. This method works when the service is reachable from the internet or deployed locally. Note that loading by URL may require additional service configuration for CORS headers and Spring Security. The second and third ways are uploading a JSON file or pasting its content.

The deployed application is located at https://alexey-lapin.github.io/spring-boot-startup-analyzer/. For the analyser demo, I used my own Spring Boot service deployed on Heroku, which implements the back end of the RealWorld project. The desired endpoint can be found at https://realworld-backend-spring.herokuapp.com/actuator/startup. The service is configured to send the correct CORS headers for GET requests from the analyser.

Once you load the events using one of these methods, the data is visualised as a tree. Note that all rows with child items are initially collapsed. To navigate the tree, you can use the ">" icons to the left of an item's ID, or expand/collapse all rows at once using the Expand All / Collapse All buttons. If there are many events, rendering the expansion of all rows may take some time. In the table view, all events are displayed at once, and all columns except Tags can be sorted.
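To illustrate how a tree view can be derived from the flat event list, here is a small sketch. It is not taken from the analyser's source; only the event shape (startupStep.id / startupStep.parentId) follows the actuator JSON shown earlier:

```javascript
// Rebuild the parent -> children tree from the flat list of startup events.
// Each event carries startupStep.id and startupStep.parentId (null for roots).
function buildTree(events) {
  const byId = new Map();
  for (const e of events) {
    byId.set(e.startupStep.id, Object.assign({}, e, { children: [] }));
  }
  const roots = [];
  for (const node of byId.values()) {
    const parentId = node.startupStep.parentId;
    if (parentId === null || !byId.has(parentId)) {
      // No parent, or the parent fell outside the recording buffer limit.
      roots.push(node);
    } else {
      byId.get(parentId).children.push(node);
    }
  }
  return roots;
}
```

With the sample output above, the spring.beans.instantiate step (id 7) would end up as a child of the step with id 6.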
CI + hosting

On one of my previous projects, I was involved in a global DevOps transformation for our client and worked on automating release cycle processes and building CI/CD pipelines. It was an interesting experience, and it now helps me resolve issues around shipping product source code. In this case, as with most of my open-source projects, I used GitHub as my git hosting, as it provides many useful tools for CI, artefact storage, documentation, project management, static site hosting, etc. For the analyser, I specifically used Actions and Pages.

GitHub Actions is configured to run a workflow on events like a pull request, a commit to master, and pushing a tag. Pushing a tag also deploys the assembled project to GitHub Pages, builds the Docker image, and pushes it to Docker Hub. In addition to the analyser's public instance on GitHub Pages, you can use the Nginx-based Docker image. The latter is useful, for example, when the Spring Boot services live on an organisation's internal network with no internet access, but Docker is available and the image can be loaded. To start the container, run the following command:

docker run -d --name sbsa -p 8080:80 lexlapin/spring-boot-startup-analyzer

If you need to access this container through a reverse proxy, pass the path through the UI_PUBLIC_PATH environment variable:

docker run -d --name sbsa -p 8080:80 -e UI_PUBLIC_PATH=/some-path lexlapin/spring-boot-startup-analyzer

Things to improve

In the future, I plan to refine the results screen. It would also be useful to add a tab with a summary of event types, their counts and total elapsed time, such as the number of beans created and the total time spent creating them. Another possible feature is building charts on top of short pivot tables, especially since PrimeVue provides this through the Chart.js library.
In the tree and table views, colour coding could be used to highlight long events. It is also worth adding event filtering, for example by type.

Conclusion

The proposed analyser makes it convenient to visualise the data received from the startup actuator endpoint, estimate in detail the time spent on the various types of events that occur during service startup, and generally process startup information more efficiently. The application has a public instance on GitHub Pages and is also available as a Docker image. It was successfully used on one of Luxoft's projects to analyse the loading of slowed-down services and helped detect several classes with suboptimal logic in their constructors.

HelloWorld
1
27.01.2023. ·
5 min

Optimizing B2B e-invoicing with innovative API solutions

E-invoicing is a game-changing shift for businesses when it comes to managing their invoicing and financial operations. By eliminating paper invoicing and automating the process, companies can save time, reduce errors and increase overall efficiency. Companies use e-invoicing solutions to manage invoices with ease and gain real-time insight into their financial operations. As of January 1st 2023, e-invoicing is legally mandatory for Serbian companies for B2B and B2G transactions.

HelloWorld
2
16.01.2023. ·
10 min

Increasing your Revenue with Abandoned Cart Feature

Email marketing is a way to promote products or services through email. It is a top digital media channel, and it is important for customer acquisition and retention. In this blog, we will build a mechanism that collects data from an abandoned cart along with the visitor's email address and stores it in a custom object, so that after a certain time we can use that data to notify customers by email that they have uncompleted orders. We will achieve this with Salesforce Commerce Cloud, using some built-in functionality in Business Manager and writing custom code to support it.

STORING INFORMATION ABOUT ABANDONED CARTS

Since no system object stores data for incomplete orders, we need to create our own custom object type. Following the path Administration > Site Development > Custom Object Types, we can find all the custom object types that have already been created. To create a new one, we click New and provide an ID, which needs to be unique. The Key Attribute is also a unique identifier for the object type. From the dropdown menu, we select a Storage Scope that determines whether the object is assigned to a specific site or to the entire organization; the image below shows what we used in our case.

After that, we go to the Attribute Definitions tab and add the fields that will store information about a specific order: the customer's email address, the cart's total price, an indication of whether the customer is a registered user or a guest, and a JSON object containing the cart's data from before the customer abandoned it. We do this by clicking New and then defining the attribute: choose a unique ID, a preferred Display Name and an appropriate Value Type, and click Apply. In our case, the Value Type for the customer's email is a String, for totalPrice a Number, for registeredCustomer a Boolean, and for the JSON object a Text type. You can see an example in the image below.
To make it work, we also need to group the attributes we created: go to the Attribute Grouping tab and create a new Attribute Group by choosing an ID that is unique for this type and an arbitrary name. In our case, the ID is 'default' and the name is 'Default'; after that, we click Add. Now we assign our attributes to the newly created group. Clicking the Edit link takes us to the Assign Attribute Definition page, where we click the three-dot button to get a popup containing all the attributes of this object type. We choose our new attributes along with the type ID required to identify the object later in the code, and click Select to assign them.

In addition, we will add a new Site Preferences group so that the feature can be configured. First, go to Administration > System Object Types and search for SitePreferences. In the Site Preferences, we add two Attribute Definitions: AbandonedCartEnabled, a Boolean that tells us whether the feature is enabled, and abandonedCartEmail, the email address we will send from. After that, go to Attribute Grouping, create a new group called Abandoned Cart, and add these attributes to it. Now set the values for the attributes by going to Merchant Tools > Custom Preferences, finding Abandoned Cart, and filling in the fields as shown in the image below. Once all these steps are done successfully, we have our custom object type and site preferences created in Business Manager, and we can move on to the code that implements the feature on our site.
CREATING AND HANDLING CUSTOM OBJECTS

First, we create a helper with functions for handling the objects, to be used later in the code. The createNewObject function initially creates the custom objects and stores them in the database. It starts by mapping the product fields that matter for restoring the basket later, then builds a unique ID by merging the basket UUID and a Date.now() timestamp. Calling createCustomObject on CustomObjectMgr, passing the name of the object type as the first parameter and the ID as the second, creates the object and stores it as a custom object. We then fill in the object's fields, wrapping the writes in a Transaction so they can be saved to the database. Besides that, we store the basket UUID and the abandoned cart ID in the session; these are used to confirm that we already created an object for this session and to prevent creating another one every time the customer enters the checkout process.

For deleteObject, we just need the ID of the custom object: calling the remove() function with that ID deletes the object, and we also delete the previously stored data from the session. updateCartInfo is used when adding, removing and updating lineItems at the basket level. For that one, we again only need the ID of the custom object to fetch it with getCustomObject and, as in createNewObject, we map the product and update the object inside a transaction. Similarly to updating the cart, we create an updateEmail function, used when a guest user changes their email at the beginning of the checkout process.

In this example, the object is created in two cases: for guest customers as soon as we know their email address, and for logged-in customers at the moment the basket is created.
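A minimal sketch of the ID helper described above; the function name, separator and object type name are assumptions for illustration, and the dw.* platform calls are shown only as comments since they exist only inside SFCC:

```javascript
// Build the custom object's unique key by merging the basket UUID with a
// Date.now() timestamp, as described in the article (separator assumed).
function buildAbandonedCartId(basketUUID) {
  return basketUUID + "_" + Date.now();
}

// Inside SFCC, createNewObject would then roughly do the following
// (CustomObjectMgr and Transaction are real dw.* modules, but the object
// type 'AbandonedCartInfo' and attribute names are hypothetical):
//
//   var CustomObjectMgr = require('dw/object/CustomObjectMgr');
//   var Transaction = require('dw/system/Transaction');
//
//   Transaction.wrap(function () {
//     var co = CustomObjectMgr.createCustomObject('AbandonedCartInfo', id);
//     co.custom.email = email;                              // String
//     co.custom.totalPrice = total;                         // Number
//     co.custom.cartData = JSON.stringify(basketSnapshot);  // Text
//   });
```

Wrapping the writes in Transaction.wrap is what makes SFCC persist them to the database atomically.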
To cover the guest customer scenario, we extend the CheckoutServices.js controller from the base cartridge by using the server.extend function with the module.superModule parameter and appending the 'SubmitCustomer' endpoint. After checking that the feature is enabled in Site Preferences, we check whether the custom object has already been created by looking at the session data, and update the email to make sure it is the latest one. Then, if the basket is available, we use our custom createNewObject function to create the custom object.

Having covered guest users, we turn to the second case, the registered customer. Since we know the customer's email right away, we create the object as soon as they add a product to the basket. To achieve that, we extend the AddProduct, RemoveProductLineItem and UpdateQuantity endpoints. With the helpers already in place, the logic is pretty straightforward for all endpoints: again we make sure that the feature is enabled in Site Preferences and that the customer is logged in, and depending on whether the current basket is already saved in the session, we call createNewObject or updateCartInfo. There is one special case: when removing a product from the cart, we need to check whether the basket is now empty, in which case we call the deleteObject function to delete the whole custom object.

The email is sent only if the customer has actually abandoned the cart, so we must delete the object if the customer places the order. We therefore append the PlaceOrder endpoint the same way as the previous ones, using another custom-made function. All the custom objects we created can be found by going to site > Custom Objects > Custom Object Editor, selecting our object type name from the list, and hitting the Find button.
It should be noted that on this page, with the right permissions, we can edit, add and remove objects manually, but these functionalities should be used only for testing while implementing the code. One thing to keep in mind is that there are limits: at most 300 custom object types and a total of 400,000 custom objects, with a warning at 240,000.

Now that we can save all the necessary data, we can create a job that runs on a schedule, fetches the objects one by one, and sends an email with the cart contents to every customer for whom we created an object 15 days before the job executed; after the email is successfully sent, it deletes the custom object so that we do not exceed the limit mentioned above. It is also recommended to set a retention period on the custom object itself, so we avoid sending really old abandoned carts to customers.

CREATING A JOB

A good practice when building integrations is to create a new cartridge and name it with an int prefix, as I have done in this example. Now, the first thing that needs to be done is to configure a step type by creating a JSON file as shown below.
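The article's JSON file itself is not reproduced in this text. For orientation, an SFCC script-module step type declaration in steptypes.json generally looks something like the following; every ID, path and value below is hypothetical, so verify the schema against the SFCC job framework documentation:

```jsonc
{
  "step-types": {
    "script-module-step": [
      {
        "@type-id": "custom.SendAbandonedCartEmails",
        "module": "int_abandonedcart/cartridge/scripts/steps/sendEmails.js",
        "function": "execute",
        "transactional": "true",
        "timeout-in-seconds": "900",
        "status-codes": {
          "status": [
            { "@code": "OK", "description": "Emails sent." },
            { "@code": "ERROR", "description": "An error occurred." }
          ]
        }
      }
    ]
  }
}
```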

HelloWorld
0
08.11.2022. ·
3 min

Seven more days to apply for 5 dana u oblacima, the biggest student programming competition

5 dana u oblacima (5 Days in the Clouds) is a student programming competition that Levi9 IT organizes in cooperation with the Novi Sad branch of EESTEC, the Electrical Engineering Students' European Association. It will be especially interesting to those drawn to Cloud technologies, and it is open to all IT students in Serbia. The second edition of the competition, which in 2021 shifted its focus from Java technology to Cloud, will be held from November 15th to 19th.

HelloWorld
0
28.09.2022. ·
7 min

Android App architecture: Modularization, Clean Architecture, MVVM [Part 1]

Based on my experience from previous projects, I decided to write an article on how to properly set up the base architecture of an Android app so that it can be easily extended and applied to different kinds of applications.

HelloWorld
0
So you don't miss a thing

If you really don't want to miss anything, sign up: we send out a newsletter every two weeks.