From a Programmer's Perspective

A space for authentic experiences, challenges, and advice, offering insight into the world of programming straight from an expert's perspective.

17.06.2024. · 1 min

Bend, a new programming language: flexible on both CPU and GPU

Parallel programming has long been the holy grail of high-performance computing. If parallel processes were simple to implement, modern programming languages would not need concepts such as semaphores, locks, and mutexes. Bend is a new programming language notable for shipping with three different runtimes: a sequential reference runtime written in Rust, a parallel runtime written in C that targets the CPU, and a third written in CUDA, a language developed specifically for GPU programs - not surprising, given that Nvidia created it.

One of Bend's key characteristics is its inherent parallelism. You don't have to think much about whether your code is parallel: whatever can be executed in parallel, will be executed in parallel. For example, the expression (((1 + 2) + 3) + 4) cannot be evaluated in parallel because of the dependencies between operations - the + 3 must wait for 1 + 2, and the + 4 must wait for everything before it. However, if we grouped the parentheses differently, as in ((1 + 2) + (3 + 4)), then 1 + 2 and 3 + 4 could be computed in parallel, and the partial sums added at the end. So although Bend takes care of parallelism automatically, you still need to pay attention to how your code is structured.

Bend's syntax resembles Python, which makes it easy to learn and use. Note that Bend has not reached a final release yet - its single-thread performance currently lags behind the competition - but the team behind the language promises performance improvements with every new version.
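Setting Bend's own syntax aside, the regrouping idea can be sketched in ordinary TypeScript (a hedged illustration, not Bend code): a left fold forces strictly sequential evaluation, while a balanced tree of additions exposes independent subexpressions that a runtime like Bend's can evaluate in parallel.

```typescript
// Sequential: (((1 + 2) + 3) + 4) - each step depends on the previous one.
function foldSum(xs: number[]): number {
  return xs.reduce((acc, x) => acc + x, 0);
}

// Tree-shaped: ((1 + 2) + (3 + 4)) - the two halves are independent,
// so an inherently parallel runtime can evaluate them simultaneously.
function treeSum(xs: number[]): number {
  if (xs.length === 0) return 0;
  if (xs.length === 1) return xs[0];
  const mid = Math.ceil(xs.length / 2);
  return treeSum(xs.slice(0, mid)) + treeSum(xs.slice(mid));
}
```

Both functions compute the same sum; the difference is only in the shape of the expression tree, which is exactly what determines how much parallelism Bend can extract.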

16.05.2024. · 11 min

Syncing Data from DocumentDB to OpenSearch using Change Streams

Change streams are a feature of Amazon DocumentDB that provides a time-ordered sequence of data change events occurring within a DocumentDB cluster. Change streams can be enabled for an individual collection and can be configured to provide the complete document rather than only the change that occurred. They can be integrated natively with a Lambda function, which gives us a wide array of possibilities. In this tutorial, we will demonstrate step by step how to synchronize real-time data changes from a DocumentDB cluster to an OpenSearch domain using change streams and a Lambda function.

At the end of the tutorial, we will have an infrastructure as shown in the image above. We will create a VPC, a DocumentDB cluster, an OpenSearch domain, an API Gateway, and four Lambda functions. Three functions will be exposed via the API Gateway: one for writing data, one for reading data, and one for configuring the DocumentDB collection. The fourth function, which is the most important one, will be connected to the change stream and perform the data synchronization. Both the functions and the infrastructure will be written in TypeScript and deployed using CDK. The repository containing the entire code can be found here. Let’s get started!

VPC setup

We create a VPC using CDK’s Vpc construct. This one-liner creates a VPC with a private and a public subnet and sets up network routing. Next, we create three security groups: one for the Lambda functions, one for the DocumentDB cluster, and one for the OpenSearch domain. As the Lambda functions will perform CRUD operations on data stored in DocumentDB and OpenSearch, we add ingress rules to the DocumentDB and OpenSearch security groups, authorizing access from the Lambda security group. Additionally, we include a self-referencing ingress rule in the DocumentDB security group, which will be explained later on.

DocumentDB setup

We create a DocumentDB cluster using CDK’s DatabaseCluster construct.
The engineVersion is set to 4.0.0, since this is the only version of DocumentDB that supports change streams. The DatabaseCluster creates a master user secret for us and stores it in Secrets Manager under the name defined in masterUser.secretName. We set the vpc and securityGroup properties to the previously created VPC and DocumentDB security group. To launch the cluster in a private subnet, we set vpcSubnets.subnetType to SubnetType.PRIVATE_WITH_EGRESS; the DatabaseCluster will automatically select private subnets that have only outbound internet access. We also set the removalPolicy to RemovalPolicy.DESTROY to ensure the cluster is deleted when the stack is deleted, avoiding any unexpected costs.

OpenSearch setup

To set up the OpenSearch domain, we use CDK’s Domain construct. The vpc, securityGroups, and removalPolicy properties are set in the same manner as for the DocumentDB cluster. For the vpcSubnets property, we cannot use automatic subnet selection as we did in the DocumentDB setup. Instead, it is necessary to explicitly define exactly one private subnet, since we only have one OpenSearch node. For the simplicity of this tutorial, we rely on IAM to authorize access to the OpenSearch domain. The Domain construct does not create a resource-based IAM policy on the domain, known as the domain access policy. This allows us to authorize access using identity-based policies, such as the IAM role of a Lambda function, without conflicting with the domain access policy. If you wish to explore OpenSearch security in more detail, check out the official documentation available here.

Lambda functions setup

Before we create the Lambda functions, we need to create an API Gateway that will be used to invoke them. As with the other resources, we create the API Gateway using the RestApi construct. We also attach two resources, demo-data and config, to the API Gateway.
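Put together, the setup described so far might look roughly like this in CDK (a sketch with illustrative construct IDs, instance sizes, and secret names, not the repository’s exact code):

```typescript
import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as docdb from "aws-cdk-lib/aws-docdb";
import * as opensearch from "aws-cdk-lib/aws-opensearchservice";
import * as apigw from "aws-cdk-lib/aws-apigateway";
import { Construct } from "constructs";

export class DemoStack extends cdk.Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // VPC with a public and a private subnet per AZ, routing included.
    const vpc = new ec2.Vpc(this, "Vpc", { maxAzs: 2 });

    // One security group per tier.
    const lambdaSg = new ec2.SecurityGroup(this, "LambdaSg", { vpc });
    const docDbSg = new ec2.SecurityGroup(this, "DocDbSg", { vpc });
    const openSearchSg = new ec2.SecurityGroup(this, "OpenSearchSg", { vpc });

    // Lambdas may reach DocumentDB and OpenSearch...
    docDbSg.addIngressRule(lambdaSg, ec2.Port.tcp(27017));
    openSearchSg.addIngressRule(lambdaSg, ec2.Port.tcp(443));
    // ...plus the self-referencing rule the Event Source Mapping will rely on.
    docDbSg.addIngressRule(docDbSg, ec2.Port.tcp(27017));

    const cluster = new docdb.DatabaseCluster(this, "Cluster", {
      masterUser: { username: "demoadmin", secretName: "demo/docdb/master" },
      engineVersion: "4.0.0", // the only DocumentDB version with change streams
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MEDIUM),
      vpc,
      securityGroup: docDbSg,
      vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
      removalPolicy: cdk.RemovalPolicy.DESTROY,
    });

    const domain = new opensearch.Domain(this, "Domain", {
      version: opensearch.EngineVersion.OPENSEARCH_2_5,
      vpc,
      // exactly one private subnet, because we run a single node
      vpcSubnets: [{ subnets: [vpc.privateSubnets[0]] }],
      securityGroups: [openSearchSg],
      capacity: { dataNodes: 1 },
      removalPolicy: cdk.RemovalPolicy.DESTROY,
    });

    const api = new apigw.RestApi(this, "Api");
    api.root.addResource("demo-data");
    api.root.addResource("config");
  }
}
```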
Later on, we will attach a POST method to the demo-data resource for writing data to DocumentDB, as well as a GET method for reading data from OpenSearch. Additionally, we will attach a POST method to the config resource, which will be used to configure change streams on the DocumentDB collection.

Writing data to the DocumentDB cluster

To be able to write to the DocumentDB cluster, the writer Lambda function requires access to the cluster’s master secret, so we create an IAM role for the writer function that contains all the necessary permissions. In the inlinePolicies property, we add a new policy that grants access to the cluster’s secret through the secretsmanager:GetSecretValue action. We also include the managed policy AWSLambdaVPCAccessExecutionRole, which provides all the permissions required for running a Lambda function in a VPC and writing logs to CloudWatch.

To create the Lambda function, we use the NodejsFunction construct. This construct simplifies the process of creating Lambda functions by automatically transpiling and bundling TypeScript or JavaScript code; under the hood, it uses esbuild. We assign the previously created VPC and security group to the Lambda function using the vpc and securityGroups properties. We configure two environment variables, DOCUMENT_DB_SECRET and DOCUMENT_DB_ENDPOINT, which store the ARN of the cluster’s master secret and the endpoint of the cluster, respectively. The Lambda function will use these values to establish a connection with the DocumentDB cluster.

By default, the DocumentDB cluster uses TLS (Transport Layer Security). To establish a connection with the cluster, we need to verify its certificate using the AWS-provided Certificate Authority (CA) certificate. The file global-bundle.pem contains the AWS CA certificate. To make it available to the Lambda function at runtime, we use the afterBundling command hook, which copies global-bundle.pem into the Lambda deployment package.
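Sketched in CDK, the writer function’s role and bundling hook might look like this (assuming the vpc, lambdaSg, and cluster constructs from the earlier setup; the entry path is illustrative):

```typescript
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as iam from "aws-cdk-lib/aws-iam";
import * as docdb from "aws-cdk-lib/aws-docdb";
import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";
import { Construct } from "constructs";

declare const scope: Construct;
declare const vpc: ec2.Vpc;
declare const lambdaSg: ec2.SecurityGroup;
declare const cluster: docdb.DatabaseCluster;

const writerRole = new iam.Role(scope, "WriterRole", {
  assumedBy: new iam.ServicePrincipal("lambda.amazonaws.com"),
  managedPolicies: [
    // VPC networking + CloudWatch Logs permissions
    iam.ManagedPolicy.fromAwsManagedPolicyName("service-role/AWSLambdaVPCAccessExecutionRole"),
  ],
  inlinePolicies: {
    SecretAccess: new iam.PolicyDocument({
      statements: [
        new iam.PolicyStatement({
          actions: ["secretsmanager:GetSecretValue"],
          resources: [cluster.secret!.secretArn],
        }),
      ],
    }),
  },
});

const writerFn = new NodejsFunction(scope, "WriterFn", {
  entry: "src/writer.ts", // illustrative path
  vpc,
  securityGroups: [lambdaSg],
  role: writerRole,
  environment: {
    DOCUMENT_DB_SECRET: cluster.secret!.secretArn,
    DOCUMENT_DB_ENDPOINT: cluster.clusterEndpoint.hostname,
  },
  bundling: {
    commandHooks: {
      beforeBundling: () => [],
      beforeInstall: () => [],
      // ship the AWS CA bundle next to the handler code
      afterBundling: (inputDir: string, outputDir: string) => [
        `cp ${inputDir}/global-bundle.pem ${outputDir}`,
      ],
    },
  },
});
```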
Finally, we attach the Lambda function to the API Gateway as the POST method of the demo-data resource. To connect to the DocumentDB cluster, we use the mongodb package. Within the createMongoClient() function, we first retrieve the master secret from Secrets Manager. Then we use this secret, along with the previously bundled CA certificate, to establish a connection with the cluster. In the handler function, we simply instantiate a MongoClient and write the request’s body to the demo-collection.

Enabling change streams

To use change streams, we need to enable them either for the entire DocumentDB database or for a specific collection. Since our DocumentDB cluster is deployed in a private subnet of the VPC, direct access to it is not possible. To overcome this limitation, we create a Lambda function responsible for configuring change streams on the demo collection. This Lambda function is deployed within the VPC and exposed through the API Gateway, enabling invocation from outside the VPC. In a real-world scenario, these configuration tasks would typically be performed either through a script during deployment, such as a CodeBuild job, or manually on the cluster if direct access is available (e.g., via a bastion host or VPN connection). For the purposes of this demo, a Lambda function is the simplest solution. The setup for the configuration Lambda function follows the same steps as the writer function, so we can skip directly to the handler code. In the code, we create the demo-collection collection and execute an admin command that enables change streams on it.

Event Source Mapping setup

An Event Source Mapping (ESM) is a Lambda resource that reads from an event source and triggers a Lambda function. In this case, we use an ESM to read change stream events from the DocumentDB cluster and invoke the sync Lambda function.
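The connection string assembled inside createMongoClient() can be sketched as a small pure helper (the secret field names below are assumptions about the Secrets Manager payload; the resulting URI is what you would hand to new MongoClient(...) together with a tlsCAFile option pointing at global-bundle.pem):

```typescript
interface DocDbSecret {
  username: string;
  password: string;
  host: string;
  port: number;
}

// Builds a DocumentDB connection URI. DocumentDB does not support retryable
// writes, so retryWrites=false is required; tls=true matches the cluster default.
function buildDocDbUri(secret: DocDbSecret): string {
  const user = encodeURIComponent(secret.username);
  const pass = encodeURIComponent(secret.password);
  return `mongodb://${user}:${pass}@${secret.host}:${secret.port}/?tls=true&retryWrites=false`;
}
```

Encoding the username and password guards against special characters in the generated secret breaking the URI.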
The ESM will handle the connection to the DocumentDB cluster, read the change stream events, group them into batches, and invoke the sync function. In the sync function, we will simply write the entire document to the OpenSearch domain. To perform its tasks successfully, the ESM requires the appropriate permissions at both the networking level and the IAM level. The ESM “inherits” the security group of the DocumentDB cluster and uses it when establishing a connection to the cluster. This is precisely why we included a self-referencing inbound rule in the security group of the DocumentDB cluster during the VPC setup: this rule allows the ESM to reach the cluster.

An ESM relies on the permissions granted by the function’s execution role to read and manage items within the event source. Therefore, in the IAM role of the sync function, we include three statements (ESMNetworkingAccess, ESMDocumentDbAccess, ESMDocumentDbSecretAccess) that grant the permissions required by the ESM. The ESMNetworkingAccess statement provides networking permissions, the ESMDocumentDbAccess statement grants DocumentDB management permissions, and the ESMDocumentDbSecretAccess statement allows the ESM to read the master secret of the cluster. We also include an OpenSearchAccess statement, which is used by the sync Lambda function itself. The actions es:ESHttpPost, es:ESHttpPut, and es:ESHttpGet within this statement grant the ability to read and write data to the domain or index defined in the resources field.

The sync function is defined in the same way as the writer and config functions, using the NodejsFunction construct, so we can continue to the ESM definition. In the ESM definition, we specify the sync function in the functionName property, the DocumentDB cluster in the eventSourceArn property, and the cluster’s master secret in the sourceAccessConfigurations property.
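As a sketch, the ESM definition might use the CfnEventSourceMapping resource along these lines (the database and collection names are the demo values; treat the exact shape as an assumption rather than the repository’s code):

```typescript
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

declare const scope: Construct;
declare const syncFn: lambda.Function;
declare const clusterArn: string; // ARN of the DocumentDB cluster
declare const masterSecretArn: string; // ARN of the cluster's master secret

new lambda.CfnEventSourceMapping(scope, "DocDbEsm", {
  functionName: syncFn.functionName,
  eventSourceArn: clusterArn,
  startingPosition: "LATEST",
  // the ESM authenticates against the cluster with the master secret
  sourceAccessConfigurations: [{ type: "BASIC_AUTH", uri: masterSecretArn }],
  documentDbEventSourceConfig: {
    databaseName: "demo",
    collectionName: "demo-collection",
    fullDocument: "UpdateLookup", // deliver the whole document, not just the delta
  },
  enabled: false, // switched on only after the config endpoint enables change streams
});
```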
Within the documentDbEventSourceConfig, we define the database and collection from which we want to read change streams. By specifying the value UpdateLookup in the fullDocument property, we indicate that we want to receive the entire document in the change stream event, rather than just the delta of the change. We initially set the enabled property of the ESM to false. We will enable the ESM later on, once we have set up change streams on the demo collection by invoking the config endpoint. If we were to enable the ESM immediately, since it is created before the config method is invoked, it would detect that change streams are not enabled, and we would need to restart it.

To establish a connection with the OpenSearch domain, we use the Client class from the @opensearch-project/opensearch package. The Client relies on the AwsSigv4Signer to obtain the credentials of the sync Lambda function and sign requests using the AWS SigV4 algorithm. This signing process is necessary because the OpenSearch domain uses IAM for authentication and authorization. In the sync function code, we simply instantiate an OpenSearch client, iterate through the change stream events, and write them to the demo-index index.

Reading data from the OpenSearch domain

To retrieve data from the OpenSearch domain, we create a reader Lambda function and attach it to the API Gateway. The reader function requires the same OpenSearch permissions as the sync function to access the domain. We create an IAM role specifically for the reader function and, as with the other functions, include the managed policy AWSLambdaVPCAccessExecutionRole. We create the reader function using the NodejsFunction construct. In the function’s environment, we set the OPEN_SEARCH_DOMAIN_ENDPOINT variable, and we attach the function to the GET method of the demo-data resource. In the function’s code, we instantiate the OpenSearch client, query the demo index, and retrieve the data.
We include the retrieved data in the body of the function’s response, returning it to the caller.

Deploying and testing the synchronization

Before deploying the solution, it is necessary to enable the service-linked role for the OpenSearch service. When performing operations through the AWS Console, this service-linked role is created automatically when required. Therefore, if you have previously set up an OpenSearch domain using the AWS Console, the service-linked role should already exist. If it does not, you can create it using the AWS CLI command shown below.

The entire CDK code is organized into four stacks:

change-streams-demo-vpc-stack: contains the VPC definition and security groups.
change-streams-demo-documentdb-stack: defines the DocumentDB cluster.
change-streams-demo-opensearch-stack: sets up the OpenSearch domain.
change-streams-demo-lambda-stack: creates the API Gateway and Lambda functions.

To deploy the entire solution, you can run the npm command shown below. By default, the command will use the account, region, and credentials from your default AWS profile. After the deployment is completed, you will need to retrieve the URL of the API Gateway. Once you have the URL, the next step is to invoke the config endpoint. This will create the demo collection and enable change streams on it.

After invoking the config endpoint, you need to enable the ESM. You can do this by executing the command below; the ID of the ESM can be found as the value of the esm-id output of the change-streams-demo-lambda-stack stack. Alternatively, you can enable the ESM by opening the sync Lambda function in the AWS Console and enabling the ESM in the function’s list of triggers. Now you can start adding data to the DocumentDB cluster by invoking the POST method of the demo-data endpoint. Once the data is added, it will be synchronized to the OpenSearch domain.
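The core of the sync handler can be reduced to a pure transform from change stream events to OpenSearch bulk operations (the event shape below is a simplification; the real ESM payload wraps each change event in an envelope):

```typescript
// Simplified shape of a DocumentDB change stream event. Field names follow
// MongoDB change events; the exact ESM envelope may differ.
interface ChangeEvent {
  operationType: "insert" | "update" | "replace" | "delete";
  documentKey: { _id: string };
  fullDocument?: Record<string, unknown>;
}

// Hypothetical helper: turns a batch of change events into OpenSearch
// bulk-API lines (index/delete actions targeting demo-index).
function toBulkOperations(events: ChangeEvent[]): object[] {
  const ops: object[] = [];
  for (const e of events) {
    const id = e.documentKey._id;
    if (e.operationType === "delete") {
      ops.push({ delete: { _index: "demo-index", _id: id } });
    } else if (e.fullDocument) {
      // fullDocument is present thanks to fullDocument: "UpdateLookup"
      ops.push({ index: { _index: "demo-index", _id: id } });
      ops.push(e.fullDocument);
    }
  }
  return ops;
}
```

The resulting array maps directly onto a bulk call of the @opensearch-project/opensearch client, e.g. client.bulk({ body: ops }).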
To retrieve the synchronized data, you can invoke the GET method of the demo-data endpoint. The response should contain the same data that was added through the POST method. You can monitor the execution and logs of the Lambda functions using the CloudWatch service.

After testing the synchronization, you can delete the resources by invoking the command below. Stateful resources, such as the DocumentDB cluster and the OpenSearch domain, are configured with RemovalPolicy.DESTROY and will be deleted along with the stacks. All created resources are tagged with the Application tag, which has the value change-streams-demo. Once the destroy command completes, you can double-check that all resources have been deleted by using the Tag Editor of the AWS Resource Groups service; the Tag Editor allows you to search for resources based on their tags. Any remaining resources can be deleted manually.

Conclusion

In this post, I have demonstrated how to achieve real-time data synchronization from a DocumentDB cluster to an OpenSearch domain using change streams and a Lambda function. The majority of the heavy lifting is handled by AWS on our behalf. For instance, the Event Source Mapping performs all the complex tasks, such as polling for changes and grouping them into batches, while we simply integrate our Lambda function into the flow.

The architecture presented here can be used to enhance the search performance of an existing DocumentDB cluster by replicating its data into a search-optimized OpenSearch domain. This is just one example of the numerous possibilities that change streams offer. Since they are easily integrated with Lambda functions, we have the flexibility to use them in any way we want; for instance, we could react to events within the DocumentDB cluster and trigger a Step Function, send notifications to users, and more. I hope you found this post useful and interesting.
If you have any questions regarding the implementation or encounter any deployment issues, feel free to leave a comment below. I’ll make sure to respond as promptly as possible.

09.05.2024. · 2 min

Operating System Security

IBM’s latest report reveals that data breach incidents cost companies an average of 4.45 million dollars last year, a 15% increase over the past three years. According to the same report, more than 550 companies were affected by such incidents during 2023. These figures show how common these incidents are becoming; below, we analyze and compare the security levels of the most popular operating systems.

ChromeOS

ChromeOS, a new-generation operating system based on Linux and developed by Google, was designed with security in mind. It offers a range of effective security features, most notably sandboxing. This mechanism runs every application or process inside an isolated environment (a sandbox), preventing it from accessing files, other applications, or the kernel, which effectively protects against web-based threats. Another notable feature is Verified Boot, which checks the system’s integrity at every startup, ensuring the operating system has not been modified or compromised by malware. Despite these advanced characteristics, ChromeOS has a relatively small market share of around 2%.

Linux

Linux, the world’s best-known open source operating system, is recognized for its robust security mechanisms, although their effectiveness varies by distribution. The Qubes distribution, based on Fedora Linux, is particularly valued for its high level of security thanks to its use of Xen virtualization. This approach isolates different tasks and applications in separate virtual machines, preventing damage from spreading if one segment is compromised. However, the recent discovery of a backdoor in a widely used compression tool, found in distributions such as Red Hat and Debian, shows that even an open source system cannot always guarantee complete security.
Although Linux holds a modest desktop market share of around 4%, it powers about 90% of the cloud sector, and its derivative, Android, dominates the mobile market.

macOS

Apple’s operating system, based on Unix, is used exclusively on the company’s own devices. The high level of macOS security stems from Apple’s policy of developing both the software and the hardware, which enables integrated protection. The company regularly publishes security updates, further hardening the system against potential threats. macOS also uses sandboxing and an effective encryption tool, FileVault, which encrypts the entire system disk and requires authentication before the system can be used. macOS holds around 15% of the personal computer market.

Windows

Windows is the operating system with the largest market share, used on more than 73% of personal computers, which is why it has always been the primary target of attacks. In earlier versions, such as Windows XP and its predecessors, users automatically had administrator privileges, which allowed malicious programs to take control of the system easily. Microsoft has often been criticized for reacting slowly to discovered security vulnerabilities. Today, with Microsoft Defender, Windows provides real-time protection, including an advanced firewall. These improved security measures significantly reduce the need for third-party antivirus software, offering effective protection integrated directly into the operating system.

17.04.2024. · 5 min

Node.js Lambda Package Optimization: Decrease Size and Increase Performance Using ES Modules

This article explains how to optimize Node.js AWS Lambda functions packaged in the ES module format. It also walks through an example using bundling and AWS CDK, and presents the resulting performance improvement.

Node.js has two formats for organizing and packaging code: CommonJS (CJS) - legacy, slower, larger - and ES modules (ESM) - modern, faster, smaller. CJS is still the default module system, and sometimes the only option supported by some tools. Let’s say you have a Node.js project, but you haven’t thought about this before. You may now ask yourself: in which format is my code packaged? Let’s look at some JavaScript code examples. In JavaScript, it is clear just by looking at the code. But in TypeScript, you may find yourself writing code in ESM syntax while using CJS at runtime! This happens if the TypeScript compiler is configured to produce CommonJS output. Compiler settings can be adjusted in the tsconfig.json file, and we will show how to avoid CJS output in an example later.

There are two ways for Node.js to determine the package format. The first is to look up the nearest package.json file and its type property. We set it to module if we want to treat all .js files in the project as ES modules; we can omit the type property or set it to commonjs if we want the code packaged in the CommonJS format. The second way is file extensions: files ending in .mjs (ES modules) or .cjs (CommonJS) override the package.json type and force the specified format, while files ending in plain .js inherit the chosen package format.

ES modules

So how exactly can ESM help us improve Lambda performance? ES modules support features like static code analysis and tree shaking, which means it’s possible to optimize code before runtime. This can eliminate dead code and remove unneeded dependencies, which reduces the package size. You benefit from this in terms of cold start latency.
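For instance, here is how the same module reads in the two formats (a minimal illustration; the CJS variant is shown in comments):

```typescript
// CommonJS (CJS) - dynamic, resolved at runtime:
//   const { createHash } = require("node:crypto");
//   module.exports = { digest };

// ES modules (ESM) - static imports and exports that bundlers can analyze:
import { createHash } from "node:crypto";

export function digest(input: string): string {
  return createHash("sha256").update(input).digest("hex");
}
```

Because the ESM imports and exports are static, a bundler can see exactly which symbols are used and drop the rest, which is the basis of tree shaking.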
Function size impacts the time needed to load the Lambda, so we want to reduce it as much as possible. Lambda functions support ES modules starting from the Node.js 14.x runtime.

Example

Let’s take a simple TypeScript project as an example to show what we need to configure to declare a project as an ES module. We will add just a couple of dependencies, including the aws-sdk client for DynamoDB, the Logger from Lambda Powertools, and Lambda type definitions. The type field in package.json defines the package format for Node.js; we use the module value to target ES modules. The module property in tsconfig.json sets the output format for the TypeScript compiler; in this case, the ES2022 value says that we are compiling our code to one of the ES module versions of JavaScript. You can find additional info on compiler settings at https://www.typescriptlang.org/tsconfig.

Bundling

To simplify the deployment and runtime process, you can use a tool called a bundler to combine your application code and dependencies into a single JavaScript file. This technique comes from frontend applications and browsers, but it’s handy for Lambda as well. Bundlers are also able to use the previously mentioned ES module features, which is why they are an important part of this optimization. Some of the popular ones are esbuild, webpack, and rollup.

AWS CDK

If you’re using CDK to create your cloud infrastructure, the good news is that the built-in NodejsFunction construct uses esbuild under the hood. It also allows you to configure bundler properties, so you can parametrize the process for your needs. With these settings, the bundler will prioritize the ES module version of dependencies over CommonJS. But not all third-party libraries support ES modules, so in those cases we must use their CommonJS version.

➤ Importantly, if you have an existing CommonJS project, you can keep it as is and still make use of this improvement.
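The settings described above might look like this (all values illustrative). In package.json:

```json
{
  "type": "module"
}
```

And in tsconfig.json:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ES2022"
  }
}
```

In CDK, a sketch of the NodejsFunction bundling options that prefer ESM builds of dependencies:

```typescript
import { NodejsFunction, OutputFormat } from "aws-cdk-lib/aws-lambda-nodejs";
import { Construct } from "constructs";

declare const scope: Construct;

const fn = new NodejsFunction(scope, "EsmFn", {
  entry: "src/handler.ts", // illustrative path
  bundling: {
    format: OutputFormat.ESM, // emit the bundle as an ES module
    mainFields: ["module", "main"], // resolve the ESM entry of each dependency first
  },
});
```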
The only thing you need to add is the mainFields property in the CDK bundling section, which sets the format order when resolving a package. This can help if you have trouble switching the project completely over to ES modules.

Let’s use a simple function that connects to DynamoDB as an example; its job is just to read a record from the database. We will create two Lambda functions with this same code: one using the CDK example above, and the other using the same CDK but without the ESM bundling properties. This gives us separate functions in CommonJS and ES modules, making them easier to compare. Here is the bundling output during CDK deploy with esbuild: the ESM version of the function has its package size reduced by almost 50%! The source maps file (.map) is also smaller now.

esbuild provides a page for visualizing the contents of your bundle through several charts, which helps you understand what your package consists of. It is available at https://esbuild.github.io/analyze. Here is how it looks for our test functions: in this case, the CommonJS package is improved by the bundler only by minifying the code, which got it down to 500 kB. Packages under @aws-sdk take up more than half of the package. But with the ES-module-first approach to bundling, the package size goes down even further. As you can see, there is still some code in CJS format, as some dependencies are only available as CommonJS.

Performance results

Let’s now see how much improvement is made by comparing cold start latency between the ES module and CommonJS versions of the function. A small load test with up to 10 concurrent users was executed to obtain the metrics. Below are the results for CommonJS and ES modules, visualized using CloudWatch Logs Insights. The numbers are in milliseconds: on average, we reduced cold start duration by 50+ ms, or 17%. The difference is bigger for minimum latency, which was shorter by almost 70 ms, or 26%.
These are not drastic differences, but in my experience with real-world projects, package size can go down by as much as 10x, and cold start latency by 300-400 ms.

Conclusion

The improvement from using ES modules can be seen even in the simple example above. How much you can lower cold start latency depends on how big your function is and whether it needs a lot of dependencies to do its job. But that’s the way it should be, right? For example, simple functions that just send a message to SQS/SNS and the like don’t need dependencies from the rest of the app - like a database or Redis client, which might be heavy. And sometimes shared code ends up all over the place. Even if the improvement in your case is not that big, it might still be worth considering ESM. Just be aware that some tools and frameworks still have poor or no support for ESM. In the end, why would you want to pack and deploy code you won’t use, anyway? 😄

Author: Marko Jevtović, Software Developer @Levi9Serbia

08.04.2024. · 2 min

XZ Backdoor - a Dangerous Failure of the Open Source Community

Have you ever wondered why some routine action on your computer, such as opening a program, suddenly takes longer than usual? You may have assumed the computer had just been switched on and "hadn’t warmed up yet." One Microsoft engineer found himself in a similar situation, but with far more serious consequences, ones that could have affected the security of the entire internet. The engineer noticed that when he ran the ssh command (the protocol for secure communication with servers) in a terminal, it executed with a half-second delay and used noticeably more CPU than usual. This unexpected performance drop prompted a detailed investigation, which became his "rabbit hole" in search of an answer.

At the heart of the problem he found xz, a popular data compression tool in the Linux world, used not only for compressing files into compact archives but also as an external library by many programs, including ssh. To his surprise, he discovered that a backdoor had been inserted into xz’s code several months earlier, by a developer who had been an active contributor to xz for more than two years and who enjoyed the community’s full trust. This person managed to insert malicious code that was "masked" well enough to pass code review, even though other developers were puzzled by its purpose.

The incident highlights how catastrophic the consequences could have been had the problem not been discovered in time, considering that Linux servers run more than 90% of the internet. Every program using xz for data compression would have been at risk, giving the attacker full access to the system. The event reminds us that people are often the weakest link in a system’s security chain. It also raises a rhetorical question: if this flaw was discovered by accident, because of unoptimized code, how many undiscovered backdoors still exist inside Linux without users having any idea?

03.04.2024. · 2 min

A New Google AI Model Turns Drawings into 2D Games: a Revolution in Game Development or a New Cause for Concern?

We have grown used to generative AI models that can create text, images, and audio, and short photorealistic videos were recently announced as well. However, a recent development opens a new chapter in the era of generative AI: an AI model has been presented that can generate simple 2D games from drawings, even ones made on plain paper. This advance, while exciting, has been met with caution in the game development community.

As someone with more than six years of experience in professional game development, I greeted the news of this technology with a certain amount of apprehension, similar to the reaction when an American startup announced the development of an AI named Devin. Devin was presented as a potential replacement for software engineers, but judging by the available recordings, that future is still distant. Models like this are not yet ready for practical use, given that their "programming" of video games looks more like video generation: these models do not write real code, do not react to user input, and cannot generate new scenery beyond the existing images. One of the main challenges such generative AI faces is its tendency to "hallucinate," where it can merge two objects into one, which would be unusable in a real game.

Finally, it is worth noting that I personally use generative AI in my everyday work. We need to accept that generative AI is not going away and that it will keep helping us and streamlining our processes. So, as programmers and software engineers, we do not need to fear losing our jobs to these technologies any time soon. Despite the initial apprehension, it seems that generative AI models will continue to shape the future of game development, but not in a way that would entirely replace the human contribution. As always, adaptation and innovation will be key to success in this dynamic industry.

02.04.2024. · 1 min

Google and Reddit: from Rivalry to Partnership in the Name of AI Innovation

Ever since the race to develop artificial intelligence became the central topic of the tech world, following the popularization of ChatGPT, two tech giants, Microsoft and Google, have been locked in a competition of sorts. Their goal was to build a superior AI model that would win over users with its efficiency and innovation. Microsoft made the first bold move by integrating AI into all of its products, going so far as to ask keyboard manufacturers to add a dedicated AI key. Google kept pace, following a similar path with an emphasis on integrating AI into its applications, intending for every user input to help improve its language models. But let us focus on the recent deal between Reddit and Google, which marks a significant turning point in their prior relationship. Historically, relations between the two companies were not always harmonious, since Google had previously used Reddit to index and search content to improve its own search engine and train AI models, often without an explicit partnership. Under the new deal, Google gets exclusive access to Reddit's APIs, giving it access to a rich corpus of data, including content that previously required special permission, such as private subreddits. This not only improves Google's ability to train AI models but also raises the quality of the information it serves. For Reddit, the deal has a double benefit. First, Google will pay Reddit 60 million dollars a year for exclusive access to its data, a significant financial boost. Second, given that Reddit was preparing to go public at the time, the deal certainly had a positive effect on its initial market valuation. This partnership is not only proof of AI's growing influence in the tech sector, but also a demonstration of how strategic cooperation can bring mutual benefits to the parties involved.

05.03.2024. ·
2 min

How much of our privacy are we willing to sacrifice online?

Reviewing the privacy policies of the apps we use is often neglected in the rush to try out a new feature or service. We are aware that, ever more often, what is presented to us online is not the product; we and our personal data have become the product, and we frequently consent to it. But how far have the boundaries of acceptable data collection shifted, and do they even exist anymore? Sharing information online has become second nature, but do we ever ask ourselves what happens to that information? Even what we consider hidden or ephemeral can easily end up in the wrong hands, as the case of Briton Aditya Verma shows. He jokingly sent a message on Snapchat saying he would "blow up the plane", thinking it was a private joke between friends. The message, however, led to serious consequences, including the scrambling of Spanish fighter jets. Fortunately, he was not charged with terrorism in this case, but he faces a steep fine over the fighter jets that were scrambled and escorted the plane. Edward Snowden, a former contractor for the US National Security Agency (NSA), had already warned of such dangers, pointing out that technological progress often comes at the high price of lost privacy. His revelations about mass surveillance showed how deep the roots of surveillance reach in the digital world. Although technology has brought numerous conveniences into our lives, it seems that we have, perhaps unknowingly, agreed to compromises when it comes to our privacy. This case forces us to ask what boundaries we are willing to set in exchange for the conveniences technology provides. In light of incidents like these, it is important to reassess how we approach technology and privacy. We must be aware that companies whose services we do not pay for, such as Google, Facebook, and others, may know more about us than the people we live with, while we are increasingly often victims of cybercrime and of our personal data being sold on the dark web.
Experts' advice on protecting your data ranges from avoiding free commercial software and social networks, and using more private alternatives to ubiquitous services such as search engines (DuckDuckGo), all the way to never posting photos to social media in real time, but rather with a certain delay, so that the many marketing services have a harder time knowing exactly where we are. Have we as a society become too relaxed about sharing our personal information?

20.02.2024. ·
3 min

New rules, old problems: Apple enables sideloading in the EU, but it will put developers in an unenviable position

Apple is known as a company that tries in every way to keep full control over iPhones, its best-selling devices. It began adopting the USB-C port as a standard on its laptops and tablets back in 2015 (first with the MacBook Pro) and 2018 (iPad Pro), while USB-C only reached iPhones last year, a change most users welcomed enthusiastically. The reason for that move lies in relatively new European Union legislation. Specifically, the EU passed a law requiring that all devices manufactured after it takes effect must use the USB-C standard, in order to reduce electronic waste. Although that law has its flaws, in the long run it looks better for all users. Afterward there was talk of another important law in the making, one that could seriously shake Apple: a law requiring Apple to allow sideloading. Namely, the Digital Markets Act (DMA) in the European Union is a regulatory framework that requires large tech platforms, such as Apple, to allow sideloading and access to alternative app stores on their devices. This could directly affect Apple's practice of charging a 30% commission on transactions inside the App Store, allowing developers to offer their apps and services without paying those commissions to Apple. As a result, the DMA has the potential to reduce the control Apple holds over app distribution and digital markets, offering more freedom and choice to users and developers. In late January, Apple published a blog post explaining how it will comply with this law in the EU. One part of that text is strikingly reminiscent of the decision Unity made (and then walked back) in mid-September last year: charging 50 cents per installation once an app exceeds one million total installs in a year (unlike Unity, Apple will have no trouble tracking this).
Take Facebook as an example: it has around 400 million active users in the EU, and suppose roughly a third of them are iOS users. Facebook would have to pay Apple over 60 million euros a year simply because people use its app. Of course, for Meta that would not be a major expense, but imagine if Flappy Bird had stayed on iOS: its developer would have owed Apple a serious chunk of his earnings while the giant never "lifted a finger". Most developers are furious about this decision, including some of the biggest players such as Spotify, Revolut, and others. Despite opening the door to sideloading and alternative app stores, Apple stresses that developers will still be able to distribute their apps through the Apple App Store under the existing terms, including the standard 30% commission. In this way, Apple clearly wants to push developers to rely on its App Store rather than use alternative app stores, potentially imposing on them the economically less favorable option of a fixed 50-cent fee per app download compared to the standard 30% commission. What do you think the end result of all this will be? Let us know in the comments!
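The Facebook estimate above can be sketched as a quick back-of-the-envelope calculation. This is a minimal sketch assuming the fee structure as described in the text (50 euro cents per install per year, with the first million installs exempt); the user figures are the article's rough assumptions, not official data, and the function name is hypothetical.

```python
# Back-of-the-envelope estimate of a per-install fee for a large app.
# Assumptions from the text: EUR 0.50 per annual install, first 1M exempt.

FEE_PER_INSTALL_EUR = 0.50
FREE_INSTALL_THRESHOLD = 1_000_000


def annual_install_fee(annual_installs: int) -> float:
    """Yearly fee in EUR for a given number of annual installs."""
    billable = max(0, annual_installs - FREE_INSTALL_THRESHOLD)
    return billable * FEE_PER_INSTALL_EUR


# Article's rough figures: ~400M active EU users, about a third on iOS.
ios_users = 400_000_000 // 3
fee = annual_install_fee(ios_users)
print(f"~{fee / 1_000_000:.1f} million EUR per year")  # ~66.2 million EUR
```

With these assumptions the fee comes out at roughly 66 million euros a year, consistent with the "over 60 million euros" figure above.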

Don't miss a thing

If you really don't want to miss anything, subscribe: we send out a newsletter every two weeks.